Britain’s equality regulator is set to start monitoring the use of artificial intelligence (AI) by local authorities to ensure technologies are not discriminating against people.
There is evidence that bias built into algorithms can lead to less favourable treatment of people with protected characteristics such as race and sex.
The Equality and Human Rights Commission (EHRC) has announced that from October it will work with around 30 local authorities to understand how they are using AI to deliver essential services, such as benefits payments.
The EHRC, which today published new guidance to help organisations avoid breaches of equality law, said it was concerned that automated systems were inappropriately flagging certain families as a fraud risk.
The equality regulator is also examining how organisations use facial recognition technology, following concerns that the software may disproportionately affect people from ethnic minorities.
Marcial Boo, chief executive of the EHRC, said: ‘While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.
‘Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them.
‘It’s vital for organisations to understand these potential biases and to address any equality and human rights impacts.
‘As part of this, we are monitoring how public bodies use technology to make sure they are meeting their legal responsibilities, in line with our guidance published today. The EHRC is committed to working with partners across sectors to make sure technology benefits everyone, regardless of their background.’