
Stanford’s institute ensuring AI ‘represents humanity’ lacks diversity

An institute established by Stanford University to address concerns that AI may not represent the whole of humanity itself lacks diversity.

The goal of the Institute for Human-Centered Artificial Intelligence is admirable, but the fact that its faculty consists primarily of white men casts doubt on its ability to ensure adequate representation.

Cybersecurity expert Chad Loder noticed that not a single member of Stanford’s new AI faculty was black. Tech site Gizmodo reached out to Stanford and the university quickly added Juliana Bidadanure, an assistant professor of philosophy.

Part of the institute’s problem could be the very thing it’s attempting to address – that, while improving, there’s still a lack of diversity in STEM-based careers. With revolutionary technologies such as AI, parts of society are in danger of being left behind.

The institute has backing from some big hitters. Bill Gates and Gavin Newsom have endorsed its founding principle that “creators and designers of AI must be broadly representative of humanity.”

Fighting Algorithmic Bias

Stanford isn’t the only institution fighting the good fight against bias in algorithms.

Earlier this week, AI News reported on the UK government’s launch of an investigation to determine the levels of bias in algorithms that could affect people’s lives.

Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI offers tremendous potential – such as policing, recruitment, and financial services – but where poorly implemented systems could seriously harm people’s lives.

Meanwhile, activists like Joy Buolamwini from the Algorithmic Justice League are doing their part to raise awareness of the dangers which bias in AI poses.

In a speech earlier this year, Buolamwini analysed current popular facial recognition algorithms and found serious disparities in accuracy – particularly when recognising black females.

Just imagine these algorithms being used for surveillance. Lighter-skinned males would be recognised correctly in most cases, while darker-skinned females would be mistakenly stopped far more often. We’re in serious danger of automating profiling.
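To make the danger concrete, here is a minimal sketch of the kind of per-group audit Buolamwini’s work points towards: measuring how often people who should not be flagged are wrongly flagged, broken down by demographic group. All data and labels below are invented for illustration; this is not the methodology of any specific study.

```python
# A minimal, hypothetical per-group audit of a face-matching system.
# y_true: 1 if the face genuinely matches a watchlist entry, else 0.
# y_pred: the system's decision. group: demographic label for auditing.

from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, group):
    """Share of non-matching faces wrongly flagged, per demographic group."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # all genuinely non-matching faces per group
    for t, p, g in zip(y_true, y_pred, group):
        if t == 0:
            neg[g] += 1
            if p == 1:
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Invented audit data: none of these faces should be flagged.
y_true = [0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0, 0, 0, 1, 1, 0, 1, 1]
group  = ["lighter-skinned male"] * 4 + ["darker-skinned female"] * 4

print(false_positive_rate_by_group(y_true, y_pred, group))
# {'lighter-skinned male': 0.25, 'darker-skinned female': 0.75}
```

A gap like the one in this toy output is exactly what “mistakenly stopped more often” means in practice: the same system, deployed on the same street, produces very different error rates for different groups.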

Some efforts are being made to create AIs which detect unintentional bias in other algorithms – but it’s early days for such developments, and they will also need diverse creators.
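One simple check such a tool might run is demographic parity: comparing how often a model produces a favourable decision for each group. The sketch below is an illustrative assumption about how that check could look, not the API of any existing bias-detection product.

```python
# A minimal demographic-parity check: does the model hand out
# favourable outcomes at similar rates across groups?

def demographic_parity_gap(y_pred, group):
    """Largest difference in favourable-outcome rates between groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values()), rates

# Invented hiring-model outputs: 1 = recommended for interview.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
group  = ["A"] * 5 + ["B"] * 5

gap, rates = demographic_parity_gap(y_pred, group)
print(rates)                      # {'A': 0.6, 'B': 0.2} (order may vary)
print(f"parity gap: {gap:.2f}")   # a large gap is a signal to investigate
```

Checks like this only flag symptoms, though; deciding whether a gap reflects genuine bias still requires human judgement, which is one more reason the people building these detectors need to be diverse themselves.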

However it’s tackled, algorithmic bias needs to be eliminated before it’s adopted in areas of society where it will have a negative impact on individuals.

