HCI and HRI share another common denominator
June 5, 2019
Human-Computer Interaction (HCI) is the field focused on designing computer technology that enhances the user experience and the quality of interaction between human users and computers. Similarly, Human-Robot Interaction (HRI) tackles the same issues for the interaction between users and robots. The two fields share significant overlap in their goal of building technology that supports more natural, intuitive, and efficient interaction between humans and machines. Inspired by research in other domains and by numerous studies conducted since the 1980s, HCI and HRI share another common denominator in their research focus: personalization. Designing systems tailored to user needs has proven to play a key role in improving the quality of interaction, as it affects both the user's and the system's efficiency and advances the user experience.

In many scenarios, the preferred way of achieving personalization lies in the analysis of passive and/or implicit user feedback acquired during interaction. Passive and implicit types of feedback, such as physiological (EEG, EMG, heart rate, etc.) and behavioral (posture, facial expression, speech analysis, etc.) signals, offer a simplified approximation of how humans understand and communicate with each other, and have proven value in encoding important information about someone's physical and cognitive state as well as their intentions. Thanks to the rapid growth of technology during the last decade, designing interactions that rely on such types of passive feedback has become more meaningful than ever before. Recent advances in the field of Artificial Intelligence (AI) offer solutions that in many applications significantly outperform humans and allow high-quality reasoning under real-time conditions in many individual tasks.
This fact, along with the dramatic evolution of wearable sensors, has made such approaches minimally invasive or completely non-invasive to the user, thus offering a much more natural interaction and a better user experience. Currently, personalization in most intelligent systems works either by utilizing explicit task-based metrics and metadata, such as task performance or response time, or by offering a set of predefined customization options to the user. More recent intelligent systems, which initially emerged for gaming purposes and are now widely used across different domains, have started exploring single types of behavioral or physiological signals to understand the user's physio-cognitive state. However, inference based on multiple physio-cognitive modalities for system adaptation and personalization has not yet been explored extensively. The reason is that, despite the success of machine learning (ML) on individual tasks, understanding more complex patterns of human behavior that rely on overlapping information signals – as humans do – is still an open problem of very high dimensionality.

Motivated by the above, this thesis makes an in-depth analysis of current state-of-the-art technologies in intelligent user monitoring and personalization based on physiological and behavioral signals. We discuss how passive and implicit input can be modeled for adaptive personalization and intelligent monitoring across various applications related to assistive technologies. Based on this analysis, we propose a framework for modeling the user's physical and cognitive state towards adaptive personalization. For experimentation purposes, we designed a cognitive task that challenges cognitive flexibility across auditory, visual, and textual input stimuli. The proposed system aims to improve the user's performance on the cognitive task by adapting the task parameters using the user's passive and implicit input.
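To make the adaptation idea concrete, the loop described above can be sketched in a few lines: fuse normalized passive signals into a single load estimate, then nudge a task parameter up or down. This is a minimal illustrative sketch, not the thesis framework; the feature names, weights, and thresholds are all invented for the example.

```python
# Hypothetical sketch of passive-feedback-driven adaptation.
# Feature names, weights, and thresholds are illustrative assumptions,
# not values from the proposed system.

def estimate_load(features, weights):
    """Weighted average of normalized feature values, each in [0, 1]."""
    total = sum(weights[name] * value for name, value in features.items())
    return total / sum(weights.values())

def adapt_difficulty(level, load, low=0.3, high=0.7):
    """Ease the task when inferred load is high, raise it when load is low."""
    if load > high:
        return max(1, level - 1)  # user seems overloaded: reduce difficulty
    if load < low:
        return level + 1          # user seems under-challenged: increase it
    return level                  # load in comfortable band: keep as-is

# One adaptation step with made-up normalized readings from three modalities.
weights = {"heart_rate": 0.4, "posture_sway": 0.3, "speech_pauses": 0.3}
features = {"heart_rate": 0.9, "posture_sway": 0.8, "speech_pauses": 0.7}

load = estimate_load(features, weights)      # 0.81: high inferred load
new_level = adapt_difficulty(3, load)        # difficulty drops from 3 to 2
```

A real system would replace the fixed weighted average with a learned multimodal model and smooth the signals over time, but the control structure stays the same: estimate state, compare against a target band, adjust parameters.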