MozFest 2022 took place last week and was chock-full of events from virtually every sector imaginable. Of particular interest to me was a subject that should come as no shock if you read my previous work – politics.
The event that caught my eye was the podcast ‘Let’s Get Litical - Does AI Have A Seat In Government?’. Hosted by Helen Femi, the episode features Borhane Blili-Hamelin and Umut Pajaro Valesquez, who join to discuss ethics, accountability and the future of AI.
I took the time to listen to and break down the key points of what is an absolutely fascinating discussion, regardless of how much expertise you have on politics, technology or indeed conversations around ethics.
What is ‘AI’?
Typically, AI is thought of in its corporate form: how its commercial use should be regulated. In practice, this means that corporations are often allowed to self-regulate their technologies and control them as they see fit.
The panel felt that this way of thinking needs to expand to appreciate the multifaceted nature of AI.
One of the most salient ethical considerations is the harm the technology could cause.
For example, in 2014 Amazon used algorithms to automate the hiring process and make it more efficient. Trained on the data sample provided, the algorithm learned that successful candidates possessed certain qualities – one of which was being male. Consequently, the algorithm excluded women.
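The mechanics of this kind of bias are easy to illustrate. The sketch below is a toy model, not Amazon’s actual system (their data and model were never made public): a “score candidates like past successes” approach trained on historically skewed hiring records simply inherits the skew.

```python
# Toy illustration of how a model trained on biased historical
# hiring data reproduces that bias. This is NOT Amazon's system;
# the records below are entirely made up for illustration.

# Historical records: (gender, hired). Past hiring skewed male.
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 10 + [("f", False)] * 40

def hire_rate(records, gender):
    """Estimate P(hired | gender) from the historical sample."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive "does this candidate resemble past successes?" score
# treats gender as a predictive feature, because in the skewed
# history being male correlated with being hired.
score_m = hire_rate(history, "m")
score_f = hire_rate(history, "f")

print(f"male score: {score_m:.2f}, female score: {score_f:.2f}")
```

The point is that no one has to program “prefer men” explicitly; the preference emerges from the data the model is asked to imitate.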
Another example is the ‘gig economy’, in particular food delivery services such as Uber Eats. These companies rely heavily on algorithms that leave workers little margin for error and punish them severely – for instance, docking pay for late deliveries regardless of the circumstances.
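The “no margin for error” criticism can be made concrete with a toy rule. This is a hypothetical sketch, not any real platform’s code or penalty figures: the issue is that the rule sees only the delay, never its cause.

```python
# Hypothetical zero-tolerance lateness penalty, of the kind the
# panel criticised. Not based on any real platform's code.

def delivery_pay(base_pay: float, minutes_late: float) -> float:
    """Dock pay for any lateness at all. Context the worker cannot
    control (traffic, weather, restaurant delays) is invisible here."""
    if minutes_late <= 0:
        return base_pay
    # Flat 50% penalty, however small the delay and whatever the cause.
    return base_pay * 0.5

print(delivery_pay(10.0, 0))  # on time: 10.0
print(delivery_pay(10.0, 1))  # one minute late: 5.0
```

A human dispatcher might waive a one-minute delay caused by a road closure; a rule like this cannot, because the circumstances were never part of its inputs.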
Accountability is of course important, but the debate is over whose feet it should lie at. Indeed, the experts suggest that everyone should be involved in the process, and that it be made accessible – from the designers through to us as consumers.
This is especially important now that these technologies are beginning to confer political power. Google, for instance, is starting to accrue political power through the information it collects – collection made possible by AI.
As previously mentioned, the companies themselves wield a disproportionate amount of influence. The legal system, however, is a good place to start, and legislation is an excellent regulatory tool for controlling AI.
Facebook vs The Rohingya Community is an example of the law being used to do just this. The Rohingya Muslims of Myanmar sued the social-media giant for a total of approximately £150 billion, alleging that its algorithm perpetuated hate speech and that it failed to take down inflammatory posts during the genocide of their people.
Naturally, legislation will always lag behind the technology, but courts need to be empowered to do more. This is not helped by jurisdictions around the world reaching different decisions on AI, which prevents a cohesive response and fragments authority.
The other issue is how inaccessible the conversation surrounding the technology is, even to the courts. Mark Zuckerberg’s 2018 appearance before US lawmakers is cited as an example of this disparity in expertise – those questioning him struggled even to ask the right questions.
What is the future of AI?
We are coming up with increasingly innovative ways to approach this discussion, but work remains to be done. While these technologies develop, we will need to raise awareness and strengthen accountability in order to keep the current key players in check.
As for whether the new technology can (or indeed should) have a seat in governments across the world? My interpretation of the podcast is that it certainly can – but only if the necessary changes are made to increase awareness and accountability around AI.
Femi, Blili-Hamelin and Valesquez’s discussion was eye-opening, and I would urge you to check it out – along with some of the other enlightening events featured in the eclectic mix at this year’s MozFest.