The guest speaker is Dr Hyung Jin Chang (University of Birmingham).
Abstract
The progress of artificial intelligence relies on humans, both as teachers and beneficiaries. In my research on human-centred visual learning, the primary goal is to create vision-based algorithms that prioritise the usability and usefulness of AI systems by addressing human needs and requirements. A crucial aspect of this work is understanding human body pose, hand pose, eye gaze, and object interaction, as these provide valuable insights into human actions and behaviours. During this talk, I will discuss recent studies conducted by my research group, covering topics such as hand-object pose and shape estimation, 3D facial image rendering for gaze tracking, and body pose estimation. Additionally, I will introduce intriguing applications that leverage human-centred vision methodologies.
Speaker Bio
Dr Hyung Jin Chang is an Associate Professor in the School of Computer Science at the University of Birmingham and a Turing Fellow of the Alan Turing Institute. Before joining the University of Birmingham, he was a postdoctoral researcher at Imperial College London, and he received his PhD from Seoul National University. His research combines multiple areas of artificial intelligence, including computer vision, machine learning, robotics, and intelligent human-computer interaction. His career began with a focus on the theoretical underpinnings of machine learning and has since converged on applying these foundations to practical problems in visual surveillance, HCI, and robotics, with an emphasis on estimating human eye gaze, hand pose, body pose, kinematic structure, and 6D object pose. Recently, his research has focused on advancing computer vision and deep learning techniques to move toward intelligent human-robot and human-computer interaction based on visual data. He also extends his expertise to interdisciplinary research, applying cutting-edge deep learning to fields such as brain imaging analysis for diagnosis and the enhancement of rehabilitation exercises.
This event is supported by the Queen's Agility Fund.