The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people’s lives.
A browse through our ‘ethics’ category here on AI News will highlight the serious problem of bias in today’s algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind.
Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but where poor implementation could have a serious negative impact on people's lives.
Digital Secretary Jeremy Wright said:
“Technology is a force for good which has improved people’s lives but we must make sure it is developed in a safe and secure way.
Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.
I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre’s recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society.”
Durham police are currently using an AI tool called the ‘Harm Assessment Risk Tool’. As the name suggests, the AI assesses whether an individual is likely to cause further harm. The tool informs decisions on whether an individual is eligible for deferred prosecution.
If an algorithm proves more or less effective for individuals with certain characteristics than for others, serious problems would arise.
Roger Taylor, Chair of the CDEI, is expected to say during a Downing Street event:
“The Centre is focused on addressing the greatest challenges and opportunities posed by data driven technology. These are complex issues and we will need to take advantage of the expertise that exists across the UK and beyond. If we get this right, the UK can be the global leader in responsible innovation.
We want to work with organisations so they can maximise the benefits of data driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.
I am delighted that the Centre is today publishing its strategy setting out our priorities.”
In a 2010 study, researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asian faces, while those developed in Western countries are more accurate at recognising Caucasian faces.
Similar worrying discrepancies were highlighted by Algorithmic Justice League founder Joy Buolamwini during a presentation at the World Economic Forum back in January. For her research, she analysed popular facial recognition algorithms.
These issues with bias in algorithms need to be addressed before such systems are entrusted with critical decision-making. The public remains unconvinced that AI will benefit humanity, and AI companies themselves are bracing for ‘reputational harm’ along the way.
Interim reports from the CDEI will be released in the summer with final reports set to be published early next year.