Project
"Comparative Analysis and Modeling of User's Internal States for Multimedia Content: Framework Proposal and Application Study" (콘텐츠 환경에서의 사용자의 내적 상태 비교 분석을 통한 사용자 모델링 프레임워크 설계 및 응용 연구)
Sponsor: National Research Foundation of Korea
Principal investigator (PI): Youjin Choi
This project aims to design and validate a framework that identifies the key internal states underlying user experience in each context, as well as the relationships among those states, through comparative analysis of various states (e.g., immersion, concentration, cognitive load, and emotion), and to apply the framework in follow-up studies.
Research
Research Interests: Human-Computer Interaction (HCI), User Modeling, Affective Computing, Disability, Gamification
HCI, User modeling
IEEE Transactions on Affective Computing, Immersion Measurement in Watching Videos Using Eye-tracking Data
Immersion plays a crucial role in video watching, leading viewers to a positive experience such as increased engagement and decreased fatigue. However, few studies have measured immersion during video watching, and questionnaires are typically used to measure immersion in other applications. These methods rely on the viewer's memory and can produce biased results. Therefore, we propose an objective immersion detection model that leverages people's gaze behavior while watching videos. In a lab study with 30 participants, we carried out an in-depth analysis of a number of gaze features and machine learning (ML) models to identify the immersion state. Several gaze features are highly indicative of immersion, and ML models built on these features can detect the immersion state of video watchers. Post-hoc interviews demonstrate that our approach is applicable to measuring immersion in the middle of watching a video; practical issues are discussed as well.
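The sketch below illustrates the general approach of the paper (per-window gaze features fed into a standard ML classifier), not its actual implementation. The feature names (fixation duration, saccade amplitude, blink rate, pupil diameter), the random-forest model, and the placeholder data are illustrative assumptions rather than the study's exact setup.

# Minimal sketch (not the paper's implementation): classifying immersion
# from per-window gaze features with a standard ML pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# X: one row per time window of video watching; columns are assumed gaze
# features (mean fixation duration, saccade amplitude, blink rate, pupil diameter).
X = rng.normal(size=(300, 4))          # placeholder data for illustration only
y = rng.integers(0, 2, size=300)       # 1 = immersed, 0 = not immersed (e.g., self-report labels)

model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5)  # participant-independent splits would be preferable
print(f"mean CV accuracy: {scores.mean():.2f}")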
HCI, User modeling
Expert Systems With Applications, Diversifying dynamic difficulty adjustment agent by integrating player state models into Monte-Carlo tree search
Game developers have employed dynamic difficulty adjustment (DDA) in designing game artificial intelligence (AI) to improve players' game experience by adjusting the skill of game agents. Traditional DDA agents depend only on player proficiency to balance game difficulty, which does not always lead to improved enjoyment for players. To improve the game experience, game AIs need to consider players' affective states. Herein, we propose AI opponents that decide their next actions according to a player's affective states, in which the Monte-Carlo tree search (MCTS) algorithm exploits states estimated by machine learning models that reference in-game features. We targeted four affective states to build the models: challenge, competence, valence, and flow. The results of our user study demonstrate that the proposed approach enables the AI opponents to play automatically and adaptively with respect to the players' states, resulting in an enhanced game experience.
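As a rough illustration of the idea (not the paper's code), the sketch below biases MCTS action selection with a player-state score. The function estimate_player_state stands in for the ML models over in-game features, and the blending weight w and target-state logic are assumptions made for the example.

# Minimal sketch (assumptions, not the paper's implementation): UCT selection
# that mixes the simulated game value with an estimated player affective state.
import math
import random

def estimate_player_state(game_features, action):
    """Placeholder for ML models predicting challenge/competence/valence/flow
    from in-game features after taking `action`; returns a score in [0, 1] for
    how close the resulting state is to the desired player state."""
    return random.random()  # illustrative stub

class Node:
    def __init__(self, action=None, parent=None):
        self.action, self.parent = action, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, game_features, c=1.4, w=0.5):
    """Pick a child by UCT, blending the simulated game value with a
    per-action player-state score so the agent prefers actions expected to
    keep the player in the target affective state."""
    def score(child):
        exploit = child.value / (child.visits + 1e-9)
        explore = c * math.sqrt(math.log(node.visits + 1) / (child.visits + 1e-9))
        state = estimate_player_state(game_features, child.action)
        return (1 - w) * exploit + w * state + explore
    return max(node.children, key=score)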
HCI, Disability, Gamification
CHI'22, We Play and Learn Rhythmically: Gesture-based Rhythm Game for Children with Intellectual Developmental Disabilities to Learn Manual Sign
Manual sign systems have been introduced to improve the communication of children with intellectual developmental disabilities (IDD). Due to the lack of learning support tools, teachers face many practical challenges in teaching manual sign to children, such as low attention spans and the need for persistent intervention. To address these issues, we collaborated with teachers to develop the Sondam Rhythm Game, a gesture-based rhythm game that assists in teaching manual sign language, and ran a four-week empirical study with five teachers and eight children with IDD. Based on video annotation and post-hoc interviews, our game-based learning approach has the potential to be effective at teaching manual sign to children with IDD. Our approach improved children's attention spans and motivation while also increasing the number of voluntary gestures made without the need for prompting. Other practical issues and learning challenges were also uncovered to improve teaching paradigms for children with IDD.