
The Future of Life Institute announced in December 2018 that it had signed the Safe Face Pledge to help ensure that technologies for facial analysis will not be used in situations that might lead to bias or abuse, or be used as weapons. More information about the work of FLI can be viewed in the embedded PDF.

The Safe Face Pledge is designed to mitigate abuse of any technology that uses facial recognition, with signatories of the pledge adopting four key commitments.

Facial Recognition Technology

Facial recognition technology is a type of artificial intelligence. Most AI researchers agree that those who design and build AI systems have a moral obligation to consider the implications of the technology and to take responsibility for its use and potential misuse.

Matthew Ledvina has a keen interest in AI and works with a Venture Capital fund focusing on disrupting the lending marketplace with AI. The Safe Face Pledge requires people involved in the AI industry to take their moral obligations seriously when developing technologies that use facial recognition, helping to ensure that those technologies are not used for purposes that may cause harm.

Facial recognition technology works by using software to read facial geometry from an image. The software identifies a series of facial landmarks, derives a facial signature from them, and uses that signature to identify the person in question. The infographic attachment looks at some of the key users of facial recognition software in the modern world.
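The landmarks-to-signature idea can be sketched in a few lines of code. This is a toy illustration only: the landmark names and distance ratios below are invented for the example, and real systems use learned deep-network embeddings rather than hand-picked geometric ratios.

```python
import math

def signature(landmarks):
    """Derive a toy 'facial signature' from named landmark points.

    Ratios are normalised by the inter-eye distance so the signature
    is the same regardless of image scale. Landmark names are
    hypothetical, chosen for the example.
    """
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])

    eye_span = dist("left_eye", "right_eye")
    return [
        dist("left_eye", "nose_tip") / eye_span,
        dist("right_eye", "nose_tip") / eye_span,
        dist("nose_tip", "mouth_centre") / eye_span,
        dist("left_eye", "mouth_centre") / eye_span,
    ]

def similarity(sig_a, sig_b):
    """Euclidean distance between two signatures: smaller = more alike."""
    return math.dist(sig_a, sig_b)

face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose_tip": (50, 60), "mouth_centre": (50, 80)}
# The same face at twice the scale should yield a near-identical signature.
scaled = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}
print(similarity(signature(face), signature(scaled)))  # prints a value at or near 0.0
```

Because the signature is built from ratios rather than raw pixel distances, resizing a photograph does not change it; that scale-invariance is what lets such a signature identify the same person across different images.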

The Four Commitments of the Safe Face Pledge

The Safe Face Pledge asks signatories to make four key commitments in the fight against misuse and abuse of facial recognition software. First, organisations demonstrate their commitment to ethical principles in the design, construction and use of facial recognition technology by showing value for human life, rights and dignity. Second, they are asked to address harmful biases that the technology may introduce. Third, they must facilitate transparency. Finally, these commitments must be embedded into business practices.

Concerns About Misuse of Facial Recognition Technology

The need for the Safe Face Pledge is evident in the potential for misuse or abuse of facial recognition technology, as well as the possibility that some technologies can be weaponised. Many of the concerns people have revolve around privacy and security. Consumers today are increasingly concerned about data protection, personal security and potential misuse of data. Facial recognition adds even more personal, biometric data to the system, increasing the potential for misuse and the risk of security breaches.

According to FLI and the Safe Face Pledge, the privileges and rights of the people on whom the technology is used, whether knowingly or not, must be considered. Facial recognition software is also not infallible and has led to incidents in which members of the public were stopped and searched despite having done nothing wrong.

In the short video attachment, you can learn more about surveillance cameras in the UK.

System Inaccuracies

System inaccuracies in facial recognition technologies can result from a variety of factors, particularly training sets that are incomplete or insufficiently diverse. While these systems can learn, at present they can only learn from the information they are given; if humans do not supply enough representative data, the margin of error for facial recognition can be high, in some cases as much as 35%.
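The gap between an acceptable-looking aggregate error rate and a much higher per-group error rate can be shown with a toy calculation. The group names and counts below are invented purely for illustration, not drawn from any real benchmark.

```python
# Toy illustration: an aggregate error rate can hide large per-group
# disparities when one group dominates the data.
records = [
    # (group, correctly_recognised)
    *[("group_a", True)] * 95, *[("group_a", False)] * 5,   # 100 samples
    *[("group_b", True)] * 13, *[("group_b", False)] * 7,   # only 20 samples
]

def error_rate(rows):
    """Fraction of rows where recognition failed."""
    return sum(1 for _, ok in rows if not ok) / len(rows)

overall = error_rate(records)
per_group = {
    g: error_rate([r for r in records if r[0] == g])
    for g in ("group_a", "group_b")
}
print(f"overall error: {overall:.0%}")  # → overall error: 10%
print(per_group)                        # group_b's error is 35%, seven times group_a's
```

A system evaluated only on its overall 10% error rate would look reasonable, even though the underrepresented group experiences the 35% error rate mentioned above; this is why diverse, balanced training and test sets matter.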

FLI applauds the Safe Face Pledge and multiple other initiatives aimed at ensuring all issues and concerns with facial recognition technology and AI are being addressed.