A PhD award that mixes law and computer science, and addresses discrimination
In October 2021, Shania Kirk will be starting a doctorate that combines law and computer science. It’s an exciting move into interdisciplinary work for Shania, who completed both her undergraduate and taught postgraduate degrees at QUB Law.
Shania will be part of the LINAS Doctoral Training Partnership at QUB. LINAS’ cutting-edge research opportunities are made possible by funding of over £1 million from the Leverhulme Trust, with matching studentship funding from Northern Ireland’s Department for the Economy.
Shania’s PhD project will look at how law and regulation might hold the companies that develop artificial intelligence (AI) accountable for unfair and discriminatory decisions.
Her case study will be the exam results algorithm used by the UK government to predict GCSE and A level results during the COVID-19 pandemic. The algorithm triggered public outrage for ‘baking in’ inequality by boosting results in more affluent postcode areas while students in poorer areas had their predicted results downgraded. Even if this algorithm had been tested for discrimination based on the characteristics currently protected under anti-discrimination law (eg, gender, race, sexual orientation), that testing would not have prevented the problem that occurred, which was said to have a wealth bias.
Shania will argue for a legal obligation on AI developers to consider the risk that a particular algorithm may unfairly disadvantage people based on their socio-economic status.
Shania’s supervisors will be QUB Law’s Dr Ciarán O’Kelly and Dr Deepak Padmanabhan from the School of Electronics, Electrical Engineering and Computer Science. Speaking about Shania’s PhD, Dr O’Kelly says:
"This project sits right at the cutting edge of the challenges our societies face as we respond to innovation in AI and specifically in an era when economic inequalities are a matter of great concern. Shania’s project will make a major contribution to how law and regulation can address the potential for discriminatory effects in AI processes beyond protected categories like gender and race. We are very excited to welcome Shania to the programme and very much look forward to seeing the project develop."