Mozilla Festival is great at bringing some of the brightest minds together to explain and explore some of the biggest issues of our time. In this panel, hosted by reporter Xavier Harding, those minds turned their attention to biases and unethical uses of code, and how we might remedy them.
Joining Xavier on stage were Kathy Pham and Bitange Ndemo. Kathy is a computer scientist teaching at the Harvard Kennedy School, and co-leads the Ethical Tech Working Group at the Harvard Berkman Klein Centre. Bitange is an ICT specialist and Associate Professor at the University of Nairobi.
Diving straight into the meat of the discussion, the panel was asked how we can deal with biased algorithms and tracking. Bitange felt that it wasn’t something that the government alone could do. Creating legal sandboxes to try and regulate technology ends up stifling creativity, and it needs to be a balancing act.
“It would be boring if there wasn’t a little bad. A little bad is good.”
Kathy agreed that it is something that has to be resolved by multiple stakeholders:
“It’s an ecosystem. We need policymakers, need leadership, need individual contributors.”
The list of stakeholders is only growing as code extends further into our lives and impacts society as a whole. Code is no longer siloed as simply ‘technology’, and as such, there is a huge responsibility to think about what happens, and we need people to help regulators understand the code being written.
Kathy argued that we need to address the absence of ethics in code far earlier than at the level of company culture: ‘Culture and ethics is actually culture.’ If we teach ethics and embed it in university culture, we will see that manifest in organisational cultures too.
Education is currently arranged into lanes: if you study philosophy or computer science, that’s what you study, and you don’t stray from your lane. Worse still, there is a hierarchy within those lanes, with some subjects considered more important than others and carrying stereotypes. For example, if you study computer science, it’s immediately assumed that you must be highly intelligent; if you keep hearing that, egos grow, and you eventually assume that you know what’s best.
The conversation also touched on morality, and on the people who do good and those who do evil.
For example, it’s very easy to say that the people who work on a project must be evil. However, if you work in a company, you may only be working on a small section of a larger puzzle, and you don’t know whether it amounts to a net good or a net evil until everything comes together. There may well be an argument that such projects should examine their ethical implications before being built, but there we go. Similarly, people who hack, and would generally be considered evil, also do good: they push companies to build better products, and they created the whole security industry.
They also briefly discussed the moral standing of companies that build technologies sold to governments, and how that is now becoming a question of ethics.
Is code neutral?
Short answer, no.
There is bias everywhere, and that will be reflected in code. For example, there was a hand dryer that could only recognise white hands. Was it intentional, or did it simply never come up in testing? Perhaps they tested it on their own hands, and those hands were white. The problem lies with the datasets, and how they simply reflect real-world biases. If there are fewer BAME people in coding, there are fewer chances for these biases to be picked up before launch.
Xavier asked whether, as a society, we are doomed to repeat our problems forever. It used to be the case that black people couldn’t use something because of the law; now it’s because of biases in technology.
Bitange suggested that racism was more blatant back then, but that code has now changed how it manifests. He further suggested that we must build substantial capacity so that we can recognise bias and correct it. Before a company uses code, for recruitment in particular but also more generally, that code should be vetted for these biases.
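To make the idea of vetting a little more concrete, here is a minimal sketch of one check a reviewer might run on a hiring tool’s outputs. It is purely illustrative and not something the panel prescribed: the group labels, the example outcomes and the 0.8 ‘four-fifths’ threshold are all assumptions made for the sake of the example.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the common 'four-fifths' heuristic)."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes, grouped by a protected attribute.
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(outcomes)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.2}
print(disparate_impact_flags(rates))  # {'group_a': False, 'group_b': True}
```

A real audit would go much further than this, but even a simple selection-rate comparison can surface the kind of disparity the panel warned about before a tool is put into use.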
But who should be responsible for vetting? The government or the company?
“I worked for the government, and I would not give it to the government. There are multi-stake arrangements where you bring multiple people in to look at it. The government would just smile and use it for other purposes that are more complex than I can say now.”
Kathy concurred that there should be multiple actors looking at this, again suggesting that the whole ecosystem should be responsible.
“It should fall on all of us. The people who have the skills and are building the products should be self-regulating, and the government has a role to pass legislation that requires that responsibility, and questions should be asked about how legislation is enforced. GDPR is a good example.”
“The government has a role, for sure, it’s just a question of skill sets.”
Bitange added, “We are our own worst enemies when it comes to online. Whenever there is something that requires correction, I think another company will come and create a solution out of it. It usually happens that when we go for policy, we kill the whole thing.”
An audience member asked whether the biases in code could be useful for surfacing hidden biases within wider society. The panel were hesitant to answer definitively. They agreed that surfacing the data points can certainly be useful if it is acted upon responsibly. However, if you add automated decision-making on top of the datasets, then it can become dangerous.
You can watch the whole talk in the embedded video below.