Session Descriptions Day 1

Research in Computer Vision tends to develop in a silo, detached from its downstream societal impacts: the applications built on Computer Vision research, and the uses those applications are put to, have largely been ignored.
In this talk, Abeba will discuss the downstream impact of AI research in general, and Computer Vision in particular, using empirical findings from an analysis of three decades of Computer Vision research papers and downstream patents. She will present quantitative and qualitative analyses showing that Computer Vision research is powering mass surveillance. Abeba will highlight the ethical and societal implications of such work and the role that Computer Vision researchers might play in disrupting the ‘Computer Vision, surveillance’ pipeline.
Bio:
Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). She received her PhD in 2022 and is currently a Research Fellow at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability, with a particular emphasis on audits of AI models and training datasets, work for which she was featured in Wired UK and named to the TIME100 Most Influential People in AI list in 2023. Dr Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on Ireland’s AI Advisory Council.

The Internet of Things (IoT) is a disruptive technology that has fundamentally transformed everyday life, enabling applications such as smart cities, smart homes, and connected healthcare. This revolution will not be viable without secure connections. Current communication networks are protected by conventional cryptography, which relies on computationally complex mathematical algorithms and protocols. However, the IoT comprises many low-cost devices with limited computational capacity and battery power, which cannot afford such costly cryptography.
Physical layer security (PLS) has demonstrated great potential for protecting the IoT because it achieves security in a lightweight manner. This talk will present our recent research on PLS for the IoT. In the first part, we will present an emerging device authentication technique: radio frequency fingerprint identification (RFFI). Minute, unique, and stable hardware impairments originating from the manufacturing process can be extracted as device fingerprints to authenticate the identity of IoT devices. We will elaborate on how deep learning is leveraged to enhance RFFI performance. In the second part, we will introduce key generation from wireless channels. Channel characteristics are unpredictable and dynamic, and their randomness can be exploited to derive cryptographic keys that enable secure communications. We will present experimental evaluations with practical wireless standards, including WiFi and LoRa.
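To give a flavour of the second technique, the sketch below simulates median-threshold quantisation of reciprocal RSSI measurements. The channel trace, noise levels, and quantiser are synthetic illustrations, not the systems evaluated in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Alice and Bob probe the same channel within its coherence time, so
# reciprocity makes their RSSI traces highly correlated; an
# eavesdropper at another location observes an uncorrelated channel.
channel = rng.normal(-60.0, 5.0, 256)        # shared channel gain (dB)
alice = channel + rng.normal(0.0, 0.5, 256)  # Alice's noisy probe
bob = channel + rng.normal(0.0, 0.5, 256)    # Bob's noisy probe

def quantise(trace):
    """Median-threshold quantiser: one key bit per RSSI sample."""
    return (trace > np.median(trace)).astype(np.uint8)

key_alice, key_bob = quantise(alice), quantise(bob)

# Mismatched bits cluster near the threshold; real systems remove them
# with information reconciliation, then apply privacy amplification.
print("key disagreement rate:", float(np.mean(key_alice != key_bob)))
```

Practical schemes follow this quantisation step with information reconciliation and privacy amplification to remove residual mismatched bits and any information leaked to an eavesdropper.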
Bio:
Dr. Junqing Zhang is a Senior Lecturer (Associate Professor) at the University of Liverpool, UK. He received his PhD in Electronics and Electrical Engineering from Queen's University Belfast, UK, in 2016. His research interests include the Internet of Things, wireless security, key generation from wireless channels, radio frequency fingerprint identification, and wireless sensing. He is a Senior Area Editor of IEEE Transactions on Information Forensics and Security and an Associate Editor of IEEE Transactions on Mobile Computing. He co-leads Working Group 5 (Experiments and Demonstrations) of COST Action CA22168, Physical Layer Security for Trustworthy and Resilient 6G Systems (6G-PHYSEC). He is a recipient of the UK EPSRC New Investigator Award.

The nuclear sector is undergoing a renaissance that is driven by the need to ensure energy security and achieve net-zero targets. At the centre of this renaissance are digital technologies that represent significant innovations for the sector.
These include technologies such as artificial intelligence, digital twins, smart sensors, and telecommunication technologies. The argument is that these technologies are essential to the efficient operation and management of new nuclear designs, enabling economies of operation. This is fantastic, but these technologies introduce new cyber security risks that must be understood and addressed. In this talk, I’ll discuss the motivation for digitalisation and what it aims to achieve, examine the security risks, and argue that we need to think beyond the technology to address security and resilience risks.
Bio: Professor Paul Smith’s research is concerned with the cyber security and resilience of critical networked systems, with a focus on digitalised critical infrastructures such as those found in the nuclear sector. He is interested in understanding the risks and benefits associated with digital innovation for critical infrastructures and investigating approaches to ensuring their resilience when subject to disruptions, such as cyber-attacks. He has published extensively on these issues and has taken a leading role in numerous national and international research projects. Paul is enthusiastic about knowledge exchange and, for example, collaborates extensively with the International Atomic Energy Agency (IAEA), supporting their computer security programme.

In cybersecurity, the open-world challenge refers to the dynamic and unpredictable nature of real-world network environments, where machine learning (ML)-based defences must generalise to unseen attacks and adapt to evolving system behaviour.
Most available research solutions are designed with the assumption that a network’s behaviour is stationary over time and that all possible attack types are known (the closed-world assumption). However, real-world environments operate in an open-world setting and are subject to phenomena such as Concept Drift and Adversarial Machine Learning (AML) attacks.
Concept Drift occurs when the statistical properties of data change over time due to new attack techniques, legitimate behavioural shifts, or evolving protocols, leading to model degradation. Adversarial Machine Learning attacks exploit vulnerabilities in ML models through evasion and poisoning, with the aim of bypassing detection mechanisms.
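As a simple illustration of drift monitoring, the sketch below compares a live window of one feature against a reference window using a two-sample Kolmogorov-Smirnov test; the feature values, window sizes, and significance threshold are made-up examples rather than a production detector.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference, window, alpha=0.01):
    """Raise an alarm when a two-sample Kolmogorov-Smirnov test rejects
    the hypothesis that the live window matches the reference data."""
    statistic, p_value = ks_2samp(reference, window)
    return p_value < alpha

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 500)   # a feature at training time

stable = rng.normal(0.0, 1.0, 500)      # traffic still looks the same
shifted = rng.normal(1.5, 1.2, 500)     # behaviour (or attacks) changed

print(drift_alarm(reference, stable))   # expected: False (no alarm)
print(drift_alarm(reference, shifted))  # expected: True (drift alarm)
```

Window-based statistical tests are only one option; streaming detectors such as ADWIN or DDM track drift incrementally and react faster on high-rate traffic.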
Addressing these challenges requires the development of robust ML models capable of detecting novel threats, mitigating adversarial perturbations, and continuously evolving in response to changing network traffic patterns.
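To make the adversarial side concrete, here is a minimal white-box evasion sketch in the spirit of the Fast Gradient Sign Method, run against a toy logistic-regression detector; the weights, feature vector, and epsilon are invented for illustration and stand in for a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic-regression scorer: perturb each
    feature by eps in the direction that increases the model's loss on
    the true label, pushing a malicious sample towards the benign side
    of the decision boundary."""
    grad = (sigmoid(x @ w + b) - y) * w  # d(log-loss)/dx
    return x + eps * np.sign(grad)

rng = np.random.default_rng(7)
w, b = rng.normal(size=10), -0.5            # toy detector parameters
x = rng.normal(size=10) + 0.5 * np.sign(w)  # a flow the detector flags

print("detection score before:", sigmoid(x @ w + b))
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)
print("detection score after: ", sigmoid(x_adv @ w + b))
```

The same gradient signal that trains the detector tells a white-box attacker which direction moves a sample across the decision boundary, which is why defences such as adversarial training are an active research topic.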
In this talk, we analyse these challenges and delve into practical approaches to enhance the reliability of ML-based cybersecurity solutions.
This event is funded by the UK Government as part of the Cyber AI initiative, through the New Deal for Northern Ireland. The funding is delivered on behalf of the Northern Ireland Office and the Department for Science, Innovation and Technology by Innovate UK.