I wish the answer were a simple one.
The techno-utopian promise is that by removing pesky, inherently flawed humans from the equation, we remove bias. Cold, rational machines will be able to make decisions based purely on “facts” and “data”. Machines will not be influenced by prejudice, that very human behaviour.
Reality, however, has shown us time and time again that all technology does is encode our biases. Technology does not stand separate from us; it is an extension and reflection of who we are. What’s more, technology, and machine learning in particular, can encode biases without our knowledge.
This leads to the most insidious type of exclusion, where the creators of the system may not be aware that they are creating flawed systems. It presents the biggest challenge yet to diversity and inclusion efforts as it is particularly hard to guard against.
The Detectable Bias
To illustrate the danger, consider a scenario where hiring managers are asked to explicitly list the rules they apply when sifting through anonymised CVs. Then picture what might happen when someone else reading those rules notices that a couple of surprisingly candid hiring managers listed any mention of “female sports” or “female colleges” as an exclusion factor.
That would very likely raise alarm bells: why are female sports or colleges an exclusion factor? It turns out these hiring managers believe that men make better developers, and they screen for that specifically. But because the rules were explicitly written down, they stand a chance of being detected, and the bias can be corrected.
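The point can be made concrete with a toy sketch. Everything here is invented for illustration (the keyword list, the function name, the sample CVs), but it shows why explicit rules are auditable: the biased rule sits in plain sight in the source, where a reviewer can spot and remove it.

```python
# Hypothetical sketch of an explicit, rule-based CV screen.
# The biased rule is written down, so a code review can catch it.
BANNED_KEYWORDS = {"female sports", "female college"}  # <- bias in plain sight

def passes_screen(cv_text: str) -> bool:
    """Return True if the CV survives the explicit keyword filter."""
    text = cv_text.lower()
    return not any(keyword in text for keyword in BANNED_KEYWORDS)

print(passes_screen("Captain of the female sports team; BSc Computer Science"))  # False
print(passes_screen("BSc Computer Science; open-source contributor"))            # True
```

Because the exclusion criterion is a named constant rather than a learned weight, detecting the bias is as simple as reading the code.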
Data Governed Decisions
Now, consider the same scenario but instead of asking hiring managers to explicitly write down the rules, we train a machine-learning algorithm to pick candidates based on the organisation’s past choices. No human is involved in the creation of the selection rules.
We allow the data to govern the design of the system and, because of how machine learning algorithms work, the system cannot clearly explain why it makes certain decisions.
The biases of those hiring machines will be encoded in the decision network of the algorithm, hidden well out of sight. Nobody explicitly set out to build a biased machine, but the dependence on historical data leads to exactly that.
This scenario is not purely hypothetical. This is exactly what happened to Amazon when it tried to automate candidate selection: it was forced to withdraw the system once people realised what it was doing and complained.
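Contrast this with a toy model fitted to past decisions. All the data and names below are invented, and the "model" is deliberately simplistic (keyword hire-rates rather than a real learning algorithm), but the mechanism is the one described above: no rule mentions gender anywhere, yet the weights learned from biased historical choices quietly reproduce the bias.

```python
# Hypothetical sketch: a toy scoring model fitted to past hiring decisions.
# No explicit rule exists anywhere, yet the learned weights encode the bias.
from collections import defaultdict

historical = [  # (CV keywords, was the candidate hired?) -- invented data
    ({"python", "chess club"}, True),
    ({"java", "football"}, True),
    ({"python", "female college"}, False),
    ({"java", "female sports"}, False),
    ({"python", "robotics"}, True),
    ({"java", "female college"}, False),
]

def fit(examples):
    """Score each keyword by the hire rate of past CVs containing it."""
    counts = defaultdict(lambda: [0, 0])  # keyword -> [hires, total]
    for keywords, hired in examples:
        for kw in keywords:
            counts[kw][0] += hired
            counts[kw][1] += 1
    return {kw: hires / total for kw, (hires, total) in counts.items()}

weights = fit(historical)

def predict(keywords):
    """Average the learned keyword scores; screen in if above 0.5."""
    scores = [weights.get(kw, 0.5) for kw in keywords]
    return sum(scores) / len(scores) > 0.5

# Identical technical skills, different affiliations: the encoded bias decides.
print(predict({"python", "robotics"}))        # True
print(predict({"python", "female college"}))  # False
```

Nothing in `fit` or `predict` refers to gender, and nobody wrote an exclusion rule; the discrimination lives entirely in the fitted `weights` dictionary, which is exactly why it is so hard to audit.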
Changing, Not Encoding
So what space is there for automation in diversity and inclusion efforts? Well, the type of automation we are able to do these days (i.e. machine-learning driven automation) is dependent on historical data. Diversity and inclusion, however, are about breaking with the past. They are about changing our broken behaviours, not encoding them.
We simply cannot depend solely on historical data to do that. Automation, nevertheless, does have a role to play.
Automation, applied carefully and judiciously with guards against bias in the data, can help us optimise resource usage. It can help automate the more mundane tasks.
It can help free up our time from the myriad daily activities that do not bring real value to our lives. It can help us focus and dedicate more time to the decisions that matter, the decisions that affect people’s lives.
Automation is not the solution to bias, only we can fix our biases. Automation, however, can provide us with more space to improve ourselves.
At The Panoply, we are working hard to collect data with the explicit aim of revealing biases and inequities. We know well, however, that the challenges that come afterwards are for humans, not machines, to fix.
At the same time, the automation space is in constant evolution and there are multiple ways to deploy it. Let’s keep having honest and open conversations about the risks and rewards to ensure that we are doing so safely, effectively and fairly.