Projects
We welcome all proposals that address the general themes of "AI, Social Justice and Public Decision-making" or "Science, Governability and Society."
We ask that you read this page carefully and consider a proposal that either addresses or draws inspiration from the projects below. Each of these projects is an attempt to address our general themes. You are not tied to these projects but we do ask that you give serious consideration to the opportunities that emerge from the interdisciplinary Programme's focus and character.
Applicants are advised to make early contact with one of the members of the LINAS team in order to discuss and develop a proposal that fits within both themes of LINAS and adopts its interdisciplinary approach.
-
The Psychological Consequences of Perceived Algorithmic Injustice
First Supervisor: Gary McKeown, School of Psychology
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic area: AI, Social Justice and Public Decision-making
Subject area: AI and Humans: Reckoning and Judgement
Project Overview
Increasingly, decisions about people’s lives are being made by algorithms. Companies now exist that offer automated recruitment interviews that use emotion recognition software to judge people’s emotional intelligence. Similar emotion recognition algorithms are also being used to monitor prison populations in the criminal justice systems of some countries. Online reviewers are assessed for the veracity, sentiment and value of their reviews, and there are attempts to automatically detect the personality characteristics of a reviewer. Reviews can be gathered from formal review websites and communities or scraped from social media, as is typical with sentiment analysis companies. This project seeks to assess the impact of algorithmic assessment on humans through a series of psychological experiments, in the laboratory and, where possible, in real-world settings. There is considerable debate in the world of emotion psychology concerning the degree to which we understand the nature of people’s emotions (Feldman Barrett et al., 2019). However, despite the lack of consensus within psychology about how we should interpret linguistic and social signals for their emotional meaning, the technological and computer science domains feel little compulsion to address the complexity or nuances of the arguments within psychology. A typical approach within the world of affective computing is to choose a psychological theory “off the shelf” and implement that theory uncritically. Theories that allow simple classification are preferred over more complex theoretical approaches because classification algorithms are much easier to implement within typical supervised machine learning paradigms. In assessing the sentiment of reviews and attributing personality characteristics to reviewers, regression-based techniques are more common, but they often make assessments from a small amount of evidence and minimal context. In both contexts, decisions that influence people’s lives can be based on weak theoretical foundations.
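By way of illustration only – this is not part of the proposed experiments – the sketch below contrasts the two styles of assessment at stake: a categorical classifier that forces a review into a discrete emotion class, and a regression model that places it on a continuous (dimensional) scale. The toy reviews, labels and valence scores are invented, and scikit-learn is used purely as a convenient stand-in for the kinds of supervised pipeline described above.

```python
# Illustrative sketch only: categorical vs dimensional assessment of short texts.
# The corpus, labels and valence scores below are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.pipeline import make_pipeline

reviews = [
    "Absolutely loved it, would buy again",
    "Terrible experience, very disappointed",
    "It was fine, nothing special",
    "Great value and friendly service",
]
emotion_labels = ["joy", "anger", "neutral", "joy"]   # categorical view
valence_scores = [0.9, -0.8, 0.1, 0.7]                # dimensional view, in [-1, 1]

# Classification: every text is forced into one of a small set of emotion classes.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(reviews, emotion_labels)

# Regression: the same text receives a graded score on a continuous dimension.
regressor = make_pipeline(TfidfVectorizer(), Ridge())
regressor.fit(reviews, valence_scores)

new_review = ["The product arrived late but works well"]
print(classifier.predict(new_review))  # a single discrete label
print(regressor.predict(new_review))   # a continuous valence estimate
```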
This project seeks to experimentally manipulate the style of algorithm, between classification and continuous dimensional assessment, and the degree to which the assessments match people’s understanding of their own behaviour, using classic experimental psychology paradigms. In the first two experiments, the provision of feedback from mock automated job interviews in both service-sector employment and graduate-level employment will be manipulated. The feedback will either reflect or deviate from psychometric scores provided by the participants as part of the mock interview process. The degree of satisfaction or dissatisfaction with both the feedback and the providers of that feedback will be used as the experimental measures. Additionally, there will be evaluations of the automated assessment process in terms of perceived agency, level of influence, level of frustration and trust in the process on the part of the participants. Two further experiments assess the effect of automated reviews on both reviewers and participants engaged in service-style employment. The first experiment will provide feedback to reviewers of products through automated assessment of their reviews; the feedback will provide a quality score for the review, a score of the emotional tone and its distance from other reviewers, and a judgement of the reviewer’s personality, ostensibly based on the reviews but drawing from psychometric measures to manipulate the degree to which the judgements concur with the psychometric personality scales. The second experiment will engage participants as workers who have been reviewed using an automated judgement system, manipulating the review to be congruent or incongruent with performance. Two final experiments will seek to move these findings from the laboratory into the field, addressing these issues with companies who engage, or are seeking to engage, in this kind of automated assessment of people.
-
The Responsible Use of Generative AI When Working With Creative Stories of Lived Experience
First Supervisor: Austen Rainer, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Paul Murphy, School of Arts, English and Languages
Note: we will also engage Dr Anthony Quinn, potentially as a third supervisor. Dr Quinn is currently at QUB on a Fellowship from the Royal Literary Fund.
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI and Humans: Reckoning and Judgement
Project Overview
Aim: The aim of the proposed project is to evaluate the degree to which already-deployed generative AIs, such as ChatGPT, (dis)empower the writer, reader, and translator in the creative expression and interpretation of lived experience.
Interdisciplinary dimension: Generative AIs have been described as “stochastic parrots” that formulate responses to prompts using statistical processes in which there is no (or very limited) traceability to the source corpus. When it comes to using generative AI to support self-expression through creative writing, there is the risk that this “statistical parroting” might encourage the writer to conform to statistically typical expressions, e.g., formulaic literary expression based on the nature of the source corpus, with all its subtle cultural and literary biases. One potential consequence is that the writer is disempowered: their voice – the ways in which they uniquely and creatively communicate in writing – is lost in a kind of statistically-aggregated ‘chorus’; but their thinking too – the way they use language to make sense of experience – is ‘corrupted’ by a kind of statistically-normative ‘sense-making’. These consequences are particularly significant when expressing lived experience, e.g., in memoir – where the very act of expression can be an act of empowerment, even emancipation – and also particularly significant where the writer has limited writing experience, lacks confidence, or is constrained in other ways (e.g., age, disability) and thus turns to generative AIs, like ChatGPT, for assistance.
Similar risks and consequences arise for readers and for translators. For example, with Mahatma Gandhi’s autobiography, Gandhi’s native culture does not prioritise the individual’s lived experience in the way implied by the word autobiography (hence the book’s qualified title, The Story of My Experiments with Truth) yet an Anglo-Saxon cultural interpretation, and therefore generative AI ‘built’ on a corpus of Anglo-Saxon literature, would subtly prioritise interpretations centred on the individual. (Indigenous cultures, such as the Māori, also prioritise the community over the individual.)
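To make the idea of “statistical parroting” concrete, the toy sketch below – far simpler than any deployed generative AI and offered only as an analogy – builds a bigram model from a miniature, invented corpus of memoir-like sentences and then always emits the statistically most common continuation. The most typical phrasing dominates, while the corpus’s more distinctive images never surface.

```python
# Toy analogy only: a bigram model that always chooses the most frequent next
# word. Deployed generative AIs are vastly more sophisticated, but the pull
# towards statistically typical phrasing is the point being illustrated.
from collections import Counter, defaultdict

corpus = [  # invented, memoir-like sentences
    "i remember the rain on my window at night",
    "i remember the rain on my window in june",
    "i remember the smell of turf smoke",
    "i remember my grandmother's coat in the hall",
]

bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for w1, w2 in zip(words, words[1:]):
        bigrams[w1][w2] += 1

def most_typical(start, length=7):
    """Greedy decoding: always emit the most frequent continuation."""
    out = [start]
    for _ in range(length - 1):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Converges on "i remember the rain on my window"; the turf smoke and the
# grandmother's coat – the distinctive images – are never produced.
print(most_typical("i"))
```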
Programme of Work: The project will bring together academics from the School of Arts, English and Languages, who have expertise in creative writing and memoir-writing (Dr Anthony Quinn), and the impact of arts-based interventions on public health (Dr Paul Murphy), with academics from EEECS (Prof. Austen Rainer) who have expertise in the empirical evaluation of algorithmic solutions.
The primary focus of the project is on the empirical investigation of the experiences of writers (stratified, e.g., novice writers, aged writers) who use existing generative AI to help them write creatively of their lived experience, and of the impact of using generative AI on the writer’s sense of empowerment and agency. A secondary focus – time and resources permitting – would be on the experiences of readers and translators.
Envisaged impact: The project will raise the public’s understanding of the benefits and risks to (dis)empowerment through generative AI, help writers more accurately appreciate the strengths and limitations of generative AI in their (professional) work, and encourage greater awareness and appreciation amongst the AI community (including software engineers) of the limits of algorithmic solutions and, therefore, of the responsible use of generative AI in multicultural, global society.
Other relevant information
Briefly:
- There are emerging partnerships with Crescent Arts Centre in Belfast, the UK’s Royal Literary Fund (RLF) and professional writers in Northern Ireland and the wider UK.
- The project will benefit from prior work in the following areas:
- Two workshops that Dr Catherine Menon (University of Hertfordshire) and Austen Rainer have run with emerging and professional writers, hosted by Crescent Arts Centre.
- Exploratory work that Dr Anthony Quinn (professional writer, RLF Fellow and former lecturer at QUB) has already undertaken with novice writers, with funding from the Irish Arts Council.
- Software engineering students using a memoir (My Name is Why, by Lemn Sissay) in their software projects, e.g., 1 x BSc student’s final-year project; 1 x MSc taught project; 1 x MEng research and development project; 155 x students working in teams.
- Cultural projects undertaken by Dr Paul Murphy and colleagues, e.g., Friel Reimagined.
- Also, Austen Rainer has recently completed an 80,000-word memoir, so is developing direct experience from both the software engineering perspective and the memoir-writing perspective.
-
Law, Technology and Legal Practice: A Regional Case Study of How Technology Provides a Multi-layered and Differentiated Transformational Experience for Law Firms and Practitioners
First Supervisor: John Morison, School of Law
Second Supervisor: Thomas Schultze-Gerlach, School of Psychology
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI and Humans: Reckoning and Judgement
Project Overview
Algorithmically driven technologies are fundamentally changing the way that law is practised. But this transformation is not experienced evenly. There is a world of difference between how ‘Big Law’ – those large, successful international legal firms – react to and develop technology and how medium-sized law firms based in smaller cities and towns respond. Very small legal practices and solo practitioners are particularly challenged by the economies of scale required to compete even in traditional markets.
Drawing upon understandings of both law and technology that decline to see developments as occurring without reference to their immediate context, this essentially socio-legal project investigates the differential impact of technology on legal practice and the ways in which the legal market responds. While a comparative dimension would be welcome, Northern Ireland provides an excellent initial context for this investigation: as a separate jurisdiction, it contains the whole range of legal practice within a professional ecosphere where multinational firms have a significant presence alongside more traditional medium-sized and small general legal practices and a local bar.
Applicants will work in an interdisciplinary context to acquire an understanding of how legal technology is evolving before developing novel insights into how legal practice both sets the agenda for and responds to such developments.
Other relevant information
This project is closely related to an ongoing research project being carried out by John Morison and Ciaran O’Kelly, with connections to a wider all-Ireland research team in the Republic of Ireland.
-
Can We Have It All? Exploring the Trade-offs to Achieve Trustworthy AI in Future Networks
First Supervisor: Sandra Scott-Hayward, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: Science, Governability and Society
Subject Area: Cyber Security, 6G Networks, Machine Learning
Project Overview
Communication networks are expanding at a rapid rate to support an increasing number of users and services. Machine-learning-based solutions are fundamental to the design of future communication networks to meet the scale of connectivity. Where maximizing performance (e.g., achieving the highest accuracy or the quickest decision) might once have been the singular goal of an ML-based system designer, issues of security and challenges of explainability have expanded the design requirements. However, addressing security (e.g., through adversarial training) can reduce the performance of an ML-based system. Similarly, the most explainable ML model might not be the most accurate.
Performance, security, and explainability are characteristics of Trustworthy Artificial Intelligence (TAI). TAI has many definitions, ranging from the EU High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI, which state that AI should be lawful, ethical, and robust, to the International Telecommunication Union (ITU) programme of work to standardize privacy-enhancing technologies such as federated learning.
Taking account of the broad characteristics of trustworthy AI, how do we trade-off between performance, security, and explainability to achieve trustworthy AI in our future communication networks? Considering the human as both end-user of the network and designer of the system, and considering the expansion towards non-terrestrial networks and associated political impacts, how do we approach this trade-off?
In this project, we will explore these questions in the application of ML-based network security solutions in 6G networks.
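As a minimal, hedged illustration of one of these trade-offs – accuracy versus explainability, leaving security aside – the sketch below compares a shallow, human-auditable decision tree with a larger ensemble on synthetic data standing in for network-flow features. The data and model choices are invented for illustration and do not reflect the project’s eventual design.

```python
# Illustrative sketch: an explainable shallow tree vs a more accurate but
# opaque ensemble, on synthetic data standing in for network-flow features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for flow features (e.g., packet counts, durations).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

explainable = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
accurate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:", explainable.score(X_test, y_test))
print("forest accuracy      :", accurate.score(X_test, y_test))
# The shallow tree yields rules a human operator can audit line by line;
# the forest is typically more accurate but much harder to explain.
print(export_text(explainable))
```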
Other relevant information
This project aligns with the primary supervisor’s research with the CyberAI hub at the Centre for Secure Information Technologies (CSIT), the NICYBER DTP at CSIT, and activities with the IEEE P2863 Working Group on Algorithmic Governance of AI.
The student will have access to state-of-the-art network facilities and a cyber range at CSIT.
-
Mitigating the Impact of Stellar Activity on Planetary Discovery
First Supervisor: Dr Ernst de Mooij, School of Mathematics and Physics
Second Supervisor: Professor Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Thematic Area: Science, Governability and Society
Subject Area: Astrophysics / Ethics
Project Overview
The first exoplanet was discovered approximately 30 years ago. Since then, the field has made tremendous progress, with thousands of new discoveries enabled by advances in facilities and instrumentation. Dedicated surveys on novel instruments like HARPS3 aim to push the discovery space to planets with roughly the same mass and surface temperature as the Earth. However, stellar activity is currently the main limitation to reaching these goals.
The aim of this project is to investigate novel methods to remove the impact of stellar activity on radial velocity (RV) observations. By expanding the ACID code developed within ARC to allow multi-line Least Squares Deconvolution, the project will investigate the impact of separating lines that are strongly affected by stellar activity from lines that are not when measuring and correcting RVs.
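For readers unfamiliar with least-squares deconvolution, the schematic numpy sketch below shows the basic idea: the observed spectrum is modelled as a line mask convolved with a common line profile, which is recovered by a linear least-squares solve and then used to read off a radial velocity. It is a toy with entirely synthetic numbers, not the ACID code, and it ignores continuum normalisation and sign conventions.

```python
# Schematic toy example of least-squares deconvolution (LSD); synthetic numbers
# throughout, with continuum normalisation and sign conventions ignored.
import numpy as np

n_pix, n_vel = 400, 41
velocity_grid = np.linspace(-20, 20, n_vel)  # km/s

rng = np.random.default_rng(1)
line_pos = rng.integers(n_vel, n_pix - n_vel, size=30)   # pixel positions of known lines
line_depth = rng.uniform(0.2, 1.0, size=30)              # depths from a line mask

# Design matrix M: each line contributes a shifted, depth-weighted copy of the
# common line profile.
M = np.zeros((n_pix, n_vel))
for p, d in zip(line_pos, line_depth):
    for j in range(n_vel):
        M[p + j - n_vel // 2, j] += d

# Fake observed spectrum from a known Gaussian profile shifted by +3 km/s.
true_profile = 0.6 * np.exp(-0.5 * ((velocity_grid - 3.0) / 4.0) ** 2)
spectrum = M @ true_profile + rng.normal(0, 0.01, n_pix)

# Recover the common profile and read the radial velocity off its centroid.
profile, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
rv = np.sum(velocity_grid * profile) / np.sum(profile)
print(f"recovered RV ~ {rv:.2f} km/s (true shift: 3.0 km/s)")
```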
The discovery of potentially habitable planets also raises ethical issues concerning how this knowledge is communicated to the public: a planet being in the habitable zone does not guarantee that it is habitable, a nuance that can easily get lost in media interpretation. In addition, the pressure to show exciting results could lead to over- or misinterpretation, especially as the competition to lead on new information can be fierce. To address this, there is a need to build a framework that allows for the responsible and transparent disclosure of research results while also protecting the integrity of researchers and their work.
-
What are the pre-conditions for ‘Algorithmic Limbo’?
First Supervisor: Professor Muiris MacCarthaigh, School of History, Anthropology, Philosophy and Politics
Second Supervisor: Deepak Padmanabhan, School of Electronics, Electrical Engineering and Computer Science
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: Public Administration, Computer Science
Project Overview
The increased prevalence of technology and digital services in government is now well established. While it brings many benefits and efficiencies, there are numerous problems associated with what might be termed ‘algorithmic government’. More specifically, there are increasing numbers of cases of citizens becoming trapped in states of bureaucratic ‘limbo’ arising from technology, from which they cannot easily escape. Prominent examples of scandals involving such cases of limbo include the MIDAS system in Michigan (US), the Childcare Benefits scandal in the Netherlands, Robodebt in Australia, and the recent Post Office Horizon IT case in the UK.
The purpose of this PhD project is to explore what causes citizens to become stuck in algorithmic limbo and, drawing on some of these and other high-profile cases, to examine the common pre-conditions and consequences of these cases that help us to better understand how technology is re-shaping public governance. The project will traverse public administration, computer science and political science.
-
Mental Models of Story Thinking and Computational Thinking
First Supervisor: Austen Rainer, School of Electronics, Electrical Engineering and Computer Science
Second Supervisor: Jane Lugea, School of Arts, English and Languages
Thematic Area: AI, Social Justice and Public Decision-making
Subject Area: AI and Humans: Reckoning and Judgement
Project Overview
Mental models explain how people construct and elaborate mental representations of the discourse (or, as we come on to, program source code) with which they engage. When we read a story, we create and ‘execute’ a mental model, a storyworld. If the story is sufficiently engaging, we suspend our disbelief and are transported into the story: we ‘become’ the story. Similarly, when a human programmer reads the source code of a program, they go through a process of comprehending that source code: they build up a mental model of what is happening ‘in’ the code (see Heinonen et al., 2023, for a review). To quote Perlis (1982): “To understand a program you must become both the machine and the program.” (epigram #23). In both cases – the story and the program – the reader/programmer continually adapts their mental model as they progress through the text they are comprehending, e.g., retaining what is relevant and discarding other information.
The proposed project would investigate the mental models (e.g., Johnson-Laird, 2010) created and used in story thinking and those created and used in computational thinking. We expect the project to:
- Evaluate existing conceptual frameworks – e.g., Text World Theory (Lugea, 2016; Werth, 1999), schemas, or the integrated mental model of von Mayrhauser and Vans (1995) – and either use an existing framework, synthesise a framework from prior work, or formulate a new framework.
- Use the (newly developed) framework to empirically compare and contrast the two kinds of mental model, e.g., the ‘objects’ that exist in each of these kinds of mental model, how these objects relate to each other within the mental model, and how these objects are, and are not, ‘allowed’ to behave. As an example of an empirical study, we can develop contrasting scenarios – e.g., short stories, pseudocode, scenarios used in software engineering – and ask participants questions about these scenarios to explore the mental models they have developed.
- Empirically investigate the thinking that these kinds of model both allow and disallow. Extending the example empirical study, we can ask participants to reason about the scenarios, e.g., to predict what might happen in the future.
By comparing these kinds of mental model, we expect to gain insight/s into the strengths and limitations of algorithmic thinking – in simple terms, an algorithm can be defined as a method to solve a problem that consists of exactly defined instructions (Futschek, 2006) – and into the (much greater) breadth and flexibility of humans’ thinking.
References
- Heinonen, A., Lehtelä, B., Hellas, A., & Fagerholm, F. (2023). Synthesizing research on programmers’ mental models of programs, tasks and concepts—A systematic literature review. Information and Software Technology, 107300.
- Futschek, G. (2006). Algorithmic Thinking: The Key for Understanding Computer Science. Lecture Notes in Computer Science, 4226.
- Johnson-Laird, P. N. (2010). Mental models and human reasoning. Proceedings of the National Academy of Sciences, 107(43), 18243-18250.
- Lugea, J. (2016). World-building: Deixis and Modality in Spanish and English Spoken Narratives. London and New York: Bloomsbury.
- Perlis, A. J. (1982). Special feature: Epigrams on programming. ACM Sigplan Notices, 17(9), 7-13.
- Werth, P. (1999). Text Worlds: Representing Conceptual Space in Discourse. London: Longman.
- von Mayrhauser, A., & Vans, A. M. (1995). Industrial experience with an integrated code comprehension model. Software Engineering Journal, 10(5), 171-182.
Other relevant information
- There are opportunities to collaborate with the School of Psychology on this project
- The proposed project would complement a just-started LINAS PhD project (investigating large language models’ ability to make inferences about a story) and a just-started EEECS PhD project (which will examine code reviews and program comprehension).
- There are opportunities to collaborate with partners outside of QUB, e.g., the work that Rainer is doing with Menon, at the University of Hertfordshire, on story thinking and computational thinking.
The Leverhulme Interdisciplinary Network on Algorithmic Solutions (LINAS) Doctoral Training Programme hosted an information session on 30 October 2024 ahead of its deadline for applications on 31 January 2025.