How a United Nations agency buried a security report that warned of potential genocide

By Simon Davies

This is the previously untold story of how a United Nations agency decided to bury internal research findings that warned of grave security risks – up to and including the threat of genocide – to vulnerable people in its care, and of how that agency continues to ignore those warnings. In telling this story I am in breach of a confidentiality agreement with the UN.

In 2008 the United Nations High Commissioner for Refugees (UNHCR) asked a colleague and me to conduct an assessment of a new automated fingerprinting technology that the agency was rolling out across the world to manage the flow of refugees. We were given special expert status, pumped with inoculations, provided with basic diplomatic immunity and shipped off to Africa and Asia to assess the privacy and security implications of the new scheme.

Our brief was to write a full Privacy Impact Assessment (PIA) of the technology, but the project ended up going far beyond that. At the heart of our efforts was the daunting challenge of determining how complex new technologies function in some of the most hostile environments on earth. More importantly, how does a complex bureaucracy such as UNHCR manage the people and the technology in a responsible and effective way?

We visited UNHCR offices, camps and reception centres in Ethiopia, Malaysia, Kenya and Djibouti, traipsing through deserts, trouble-spots and fly-blown ghettos to try to make sense of the enormity of the task. It was to be a labour of love, but we were assured that our work would be taken seriously by the agency.

What we discovered was deeply troubling. The technology had been forced onto populations that neither understood nor wanted it. The systems were not fit for purpose and they gave rise to substantial risk to life and liberty of hundreds of thousands of the most vulnerable people in the world. We recommended wholesale changes to the way UNHCR assessed and handled the technology, and we urged a radical shift in the way it viewed the security risks to the people whose safety was its responsibility.

Biometric systems are being deployed around the world, particularly at borders to regulate who may or may not have access to a country. Most notably, the U.S. has established its US-VISIT system, followed by a similar system in Japan. The European Union, meanwhile, has implemented a number of systems for managing migrants, and has plans for more. As the UNHCR was undoubtedly aware, systems that regulate migrants’ access to safer havens must be carefully designed and tightly regulated.

Had the UNHCR’s system been implemented in any of these countries, it would have been declared illegal. The UNHCR’s Automated Fingerprint Identification System (AFIS) operated in the absence of protections, lacked even the most basic safeguards and, most importantly, lacked the necessary framework of policy. In failing to observe these basic safeguards the AFIS fell short of the standards that the UNHCR would urge countries to respect. As a result, we reported that this situation could garner international attention that would seriously harm the reputation of the UNHCR.

UNHCR’s response was not merely to ignore our findings, but to bury them. Four years after the mission began, our document has still not seen the light of day and the UN is no closer to adopting responsible privacy and security standards. Even worse, substantial follow-up documentation by field managers appears also to have been ignored and buried.

This was an unexpected and astonishing outcome. Given the sensitive nature and scale of the information being collected by the UNHCR, the potential threat to refugees arising from misuse of that personal information cannot be overstated. The loss or theft of an entire database – a situation that we regarded as quite possible – could have devastating consequences. Were this information to fall into the hands of a malicious party, vulnerable refugees would potentially be at great risk. It follows that the consequent collapse of trust in UNHCR’s work would be cataclysmic.

Reputational issues aside, however, we had to seriously consider the reasons why such a system would be illegal in other contexts.  The duty of the UNHCR is to protect the life and dignity of individuals.  This is also the exact purpose of privacy laws and other constitutional protections.  A failure to protect the privacy of the individual, therefore, can generate grave risks to the life and dignity of those whom UNHCR is trying to protect.

If implemented sensitively and intelligently, biometrics could enhance the protection of the individual and endow a refugee with greater rights.  If done poorly, however, it could leave him or her at even greater risk than before.

It is all too easy for organisations in a protective role to inadvertently use their mandate as a weapon to override decisions that should have been taken on a scientific basis, and even easier to justify bad practice on the basis of good intention. However, our research within the organisation indicated that staff members had a naïve view of the way biometrics work, based more on science fiction than on scientific assessment. Our concern was that poor decisions had been made that challenged not only the ability of the UNHCR to fulfil its protection mandate, but also the security and privacy of the people it seeks to protect.
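
The gap between science fiction and scientific assessment is easy to illustrate. In one-to-many identification, every new fingerprint is searched against every enrolled record, so a per-comparison error rate that sounds negligible compounds rapidly at camp scale. The sketch below is purely illustrative – the false match rate and database sizes are assumed for the sake of the example, and are not figures from the UNHCR system or from our report:

    # Illustrative sketch only: how a small per-comparison false match
    # rate (FMR) compounds in one-to-many (1:N) fingerprint identification.
    # The FMR and database sizes are assumptions for this example, not
    # figures from the UNHCR system; comparisons are treated as independent.

    def p_any_false_match(fmr: float, db_size: int) -> float:
        """Probability that a single 1:N search against db_size enrolled
        records returns at least one false match."""
        return 1.0 - (1.0 - fmr) ** db_size

    fmr = 0.0001  # a 0.01% per-comparison FMR sounds reassuring in isolation
    for db_size in (1_000, 100_000, 500_000):
        print(f"{db_size:>9,} records: P(false match) = "
              f"{p_any_false_match(fmr, db_size):.4f}")

    # Prints:
    #     1,000 records: P(false match) = 0.0952
    #   100,000 records: P(false match) = 1.0000
    #   500,000 records: P(false match) = 1.0000

At these assumed rates, a database of a few hundred thousand records makes at least one false match a near certainty on every search – the kind of arithmetic that a rigorous assessment regime has to confront, because a false match can attach one refugee’s record to another person entirely.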

We observed that if UNHCR is to continue deploying technologies that produce uncertain results, it will be necessary for the organisation to adopt a rigorous regime of assessment and supervision based on a transparent framework of rights. To do less would be to fail in its duty of protection to some of the world’s most vulnerable people. It can be regarded as an anomaly of modern times that, despite its role as a large-scale collector and processor of sensitive personal information, the UNHCR – like other UN organisations – has never adopted a privacy or data protection framework for its own operations. The UN itself falls outside the purview of many legal instruments and is not subject to the demands or oversight of any particular jurisdiction but its own.

Our report identified a number of fundamental weaknesses in UNHCR’s approach to data security and data privacy – weaknesses that increase the risk of mishandling, loss or inappropriate use of personal information. As a result, two major risks became apparent:
•    To the people to whom UNHCR has a responsibility: a danger that data falling into the wrong hands could result in persecution, discrimination or even imminent threat to liberty and life.
•    To UNHCR as an entity: a risk that the humanitarian reputation of trust built over decades could be severely compromised or destroyed.

There are several scenarios that we identified that could create either or both outcomes:
•    Data are lost en masse on a laptop or other device, which is then sold to bounty hunters, fraudsters, profiteers or human traffickers.
•    Data are acquired by host governments which then use the information to assist efforts to imprison or persecute populations.
•    Stolen or lost files become associated with an act of genocide or execution.

Setting aside for a moment the human tragedy associated with such outcomes, the consequent erosion of trust in, and damage to the reputation of, UNHCR could have serious long-term consequences for the work of the organisation.

We reminded UNHCR that one key motivation for the creation of privacy and data protection standards was to guard against the misuse of personal information in the extermination of populations. The significance of this connection cannot be overstated. The misuse of population databases and numbering systems has historically been linked to murder and genocide across the world – a relationship that UNHCR should regard with the deepest concern.

After eight drafts, the report hit a brick wall. In a 2009 teleconference with UNHCR directors we urged the organisation to adopt the findings of the paper and to publish at least a summary as a warning to other agencies. Nothing came of our pleas.

Whether or not the AFIS technology has been modified, suspended or extended is beside the point. What matters here are principles of governance and openness that UNHCR has failed to uphold.