Transparency and Auditability, along with Pseudo-anonymity — Are Key To A Trusted World

Do you trust your government not to snoop on you? Well, if you are in Estonia or Finland, the answer is possibly “yes”, but in many other countries, governments do not have a strong track record of handling citizens’ data in a way that earns their trust. Figure 1 shows the varying levels of trust in the handling of health-care data across EU countries, with Finland and the Netherlands scoring well.

Figure 1: Trust in dealing with health-related data [3]

In the UK, an anonymous blog post virtually killed off the roll-out of a citizen ID system by outlining how the government could use the system to spy on its citizens. A nightmare world thus involves the tax collector looking at your health record, and your GP looking at your tax return. To increase trust, we need to develop ways in which data can be linked across government systems (and generally across the Internet), but not be used to spy on citizens.

With the ever-increasing number of data breaches, users are generally losing trust in the way that their data is handled. In fact, a recent survey showed that only 3% of users in Europe trusted their cloud service provider to look after their data. We must thus increasingly replace user identities with pseudonyms, so that I am known on one system as “8043431” and on another as “6ab-89dg”.
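As a minimal sketch of how such per-system pseudonyms might be derived, the following Python uses a keyed hash (HMAC-SHA256) with a per-system label, so the same user receives unrelated-looking identifiers on each system. The key, the label format, and the truncation length are all assumptions for illustration:

```python
import hashlib
import hmac

# Hypothetical master key, held only by the identity provider.
MASTER_KEY = b"identity-provider-secret-key"

def pseudonym(user_id: str, system: str, length: int = 8) -> str:
    """Derive a per-system pseudonym with a keyed hash (HMAC-SHA256).

    Different system labels give unrelated-looking pseudonyms for the
    same user, while anyone holding MASTER_KEY can recompute the mapping.
    """
    tag = hmac.new(MASTER_KEY, f"{system}|{user_id}".encode(), hashlib.sha256)
    return tag.hexdigest()[:length]

print(pseudonym("bob", "health"))  # e.g. '52a6c1d0'
print(pseudonym("bob", "tax"))     # a different, unrelated-looking value
```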

In many systems, though, such as in health care, we need to link these pseudonyms together. A claim for benefits, for example, might require a check on income to be correlated with health records. A centralised controller is then responsible for matching the pseudonyms.

Figure 2
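In its simplest form, such a controller is just a lookup table over the pseudonyms. A toy sketch (hypothetical data and names) makes the coming problem obvious, as the controller holds every mapping in the clear:

```python
# Trent's naive linking table (hypothetical data): every mapping in the clear.
linker = {
    "bob":   {"health": "8043431", "tax": "6ab-89dg"},
    "carol": {"health": "9917262", "tax": "77c-01fa"},
}

def link(nym: str, from_system: str, to_system: str) -> str:
    """Given a pseudonym in one system, return the matching one in another."""
    for identities in linker.values():
        if identities.get(from_system) == nym:
            return identities[to_system]
    raise KeyError(nym)

print(link("6ab-89dg", "tax", "health"))  # -> '8043431'
```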

But, as Camenisch points out in [1], the centralised controller (Trent) becomes powerful, as it learns the mappings between the data infrastructures. If the linker is the government, it can link disparate databases together, which risks a large-scale breach of the whole data infrastructure and could allow government officials to see data that they should not have access to. In [2], the authors thus define (Un)linkable Pseudonyms for Governmental Databases (known as CL-15), in which it is only possible to trace the servers that are linking the identities, and where the linker cannot determine whether two requests relate to the same ID:

Figure 3
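The full CL-15 construction adds zero-knowledge proofs and proper blinding toward the receiving server, but its algebraic core can be sketched as follows (a toy, assumed setup in a small safe-prime group, with all names hypothetical). The converter holds one secret exponent per server and re-keys a blinded pseudonym, so it never sees the pseudonym itself, and the fresh blinding means it cannot tell whether two requests concern the same ID:

```python
import hashlib
import secrets

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_safe_prime(bits: int = 64) -> int:
    """Find p = 2q + 1 with both p and q prime (toy size, demo only)."""
    while True:
        q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1

p = gen_safe_prime()
q = (p - 1) // 2          # order of the subgroup of squares mod p

def to_group(uid: str) -> int:
    """Hash the master identity into the order-q subgroup (as a square)."""
    h = int.from_bytes(hashlib.sha256(uid.encode()).digest(), "big") % p
    return pow(h, 2, p)

# The converter (Trent) holds one secret exponent per server.
x = {"tax": secrets.randbelow(q - 2) + 2, "health": secrets.randbelow(q - 2) + 2}

def server_nym(uid: str, server: str) -> int:
    return pow(to_group(uid), x[server], p)

# Blind conversion: the tax server asks Trent to convert Bob's tax
# pseudonym into his health pseudonym, without revealing either one.
nym_tax = server_nym("bob", "tax")
r = secrets.randbelow(q - 2) + 2                 # fresh blinding factor
blinded = pow(nym_tax, r, p)                     # Trent never sees nym_tax
delta = (x["health"] * pow(x["tax"], -1, q)) % q
converted = pow(blinded, delta, p)               # Trent re-keys blindly
nym_health = pow(converted, pow(r, -1, q), p)    # the server unblinds

assert nym_health == server_nym("bob", "health")
```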

While this method works well for privacy, we still have a problem, and that is related to transparency. Bob actually wants to know who is snooping on him, and for the system to log that someone in HMRC has linked his health record. In Estonia and Japan (“MyPortal”), this is a core method used to increase trust, as all accesses to government records by officials are logged, and the user can then ask why a person accessed their records.

In the European Data Protection Directive, too, one of the three principles includes the right for a data subject to be informed when their personal data is being processed [1]. Transparency is thus key to creating an infrastructure where trust can be built. Within a traditional linker system, this would be fairly easy: Alice makes a request to the linker service (Trent), who then links Bob’s identities between the data infrastructures. At the same time, Trent logs Alice’s access and the reason for it. Bob can then see the reason that she accessed the linking of the data, and query it if he thinks that it is unacceptable.

Figure 4
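As a hypothetical illustration (all names assumed), such a transparent linker might record every access so that Bob can later review who linked his records and why:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEntry:
    requester: str
    subject: str
    reason: str
    when: str

@dataclass
class LoggingLinker:
    """Hypothetical 'Trent': links identities and records every access."""
    mapping: dict = field(default_factory=dict)  # subject -> {system: pseudonym}
    log: list = field(default_factory=list)

    def link(self, requester: str, subject: str, reason: str) -> dict:
        self.log.append(AccessEntry(requester, subject, reason,
                                    datetime.now(timezone.utc).isoformat()))
        return self.mapping[subject]

    def accesses_for(self, subject: str) -> list:
        # Transparency: the data subject sees who linked their records, and why.
        return [e for e in self.log if e.subject == subject]

trent = LoggingLinker(mapping={"bob": {"health": "8043431", "tax": "6ab-89dg"}})
trent.link("alice@hmrc", "bob", "benefits eligibility check")
for entry in trent.accesses_for("bob"):
    print(entry.requester, entry.reason, entry.when)
```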

But this method is weak, as it leaves behind a trail of accesses which could be used to breach Bob’s privacy. Camenisch [1] thus proposes using the blind linker method of [2] and then creating a trusted audit service which logs the blind mappings. In this way, the controller saves to an audit log system, but makes sure that all the pseudonyms remain unlinkable. Bob is then the only person who can reveal the linkages, and trace that Alice has been matching his records.

Figure 5: (Un)linkable auditing
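As a simplified sketch of this idea (not the construction in [1] itself), the audit service could store each entry sealed to Bob’s public key. With an anonymous scheme such as NaCl sealed boxes (PyNaCl here, an assumed dependency), the stored ciphertexts are unlinkable to one another, and only Bob can open them:

```python
# pip install pynacl
from nacl.exceptions import CryptoError
from nacl.public import PrivateKey, SealedBox

# Bob generates a keypair; the audit service only ever sees his public key.
bob_sk = PrivateKey.generate()
bob_pk = bob_sk.public_key

audit_log = []  # the audit service stores opaque, unlinkable ciphertexts

def log_access(subject_pk, requester: str, reason: str) -> None:
    entry = f"{requester}|{reason}".encode()
    # Sealed boxes use a fresh ephemeral sender key, so two entries for the
    # same subject cannot be linked by inspecting the stored ciphertexts.
    audit_log.append(SealedBox(subject_pk).encrypt(entry))

log_access(bob_pk, "alice@hmrc", "income check against health record")

# Only Bob, holding the private key, can open entries and trace accesses.
unsealer = SealedBox(bob_sk)
for ct in audit_log:
    try:
        print(unsealer.decrypt(ct).decode())
    except CryptoError:
        pass  # an entry sealed to some other data subject
```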

The method in [2] works by creating a public key for each pseudonym, with Bob holding the associated private key to reveal these:

Figure 6
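One way this might look in practice (a hypothetical sketch, again using PyNaCl; the derivation scheme is my assumption, not the construction in [2]) is to derive a distinct keypair per pseudonym from a single master secret, so that Bob keeps one secret yet each pseudonym gets its own public key:

```python
import hashlib
from nacl.public import PrivateKey, SealedBox

# Hypothetical master secret held by Bob (use 32 random bytes in practice).
MASTER_SEED = b"bob-master-secret"

def key_for_pseudonym(nym: str) -> PrivateKey:
    """Deterministically derive a distinct keypair for each pseudonym."""
    seed = hashlib.sha256(MASTER_SEED + nym.encode()).digest()  # 32 bytes
    return PrivateKey(seed)

pk_health = key_for_pseudonym("8043431").public_key
pk_tax = key_for_pseudonym("6ab-89dg").public_key  # unrelated-looking key

# The audit service seals an entry to the pseudonym's public key; only Bob
# can regenerate the matching private key and reveal the linkage.
ct = SealedBox(pk_health).encrypt(b"alice@hmrc|linked to tax record")
print(SealedBox(key_for_pseudonym("8043431")).decrypt(ct).decode())
```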

Conclusions

Digital trust is important, but human trust is just as important. We need to increasingly find ways to bind the two together, and make sure that our citizens feel safe in this information age. Here is a slide that we have been using for nearly a decade, and it is still relevant today. With digital trust, we need to make sure we have strong rights and identities, and with human trust, we need strong governance and services which our citizens trust:

References

[1] Camenisch, J., & Lehmann, A. (2017, April). Privacy-preserving user-auditable pseudonym systems. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 269–284). IEEE.

[2] Camenisch, J., & Lehmann, A. (2015, October). (Un) linkable Pseudonyms for Governmental Databases. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (pp. 1467–1479). ACM.

[3] Danish Council of Ethics. (2015). Research with health data and biological material in Denmark. URL: http://www.etiskraad.dk/~/media/Etisk-Raad/en/Publications/Research-with-health-data-and-biological-material-in-Denmark-Statement-2015.pdf