Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ahreum Lee is active.

Publication


Featured research published by Ahreum Lee.


Human Movement Science | 2015

Fingerstroke time estimates for touchscreen-based mobile gaming interaction.

Ahreum Lee; Kiburm Song; Hokyoung Ryu; Jieun Kim; Gyuhyun Kwon

The growing popularity of gaming applications and ever-faster mobile carrier networks have called attention to an intriguing issue that is closely related to command input performance. A mirroring game service, which simultaneously provides the same game to both PC and mobile phone users, allows them to play against each other with very different control interfaces. Thus, for efficient mobile game design, it is essential to apply a new predictive model for measuring how touch input performance compares with that of PC interfaces. The present study empirically tests the keystroke-level model (KLM) for predicting the time performance of basic interaction controls on the touch-sensitive smartphone interface (i.e., tapping, pointing, dragging, and flicking). A modified KLM, tentatively called the fingerstroke-level model (FLM), is proposed using time estimates from regression models.
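A KLM-style model of this kind predicts task completion time by decomposing a task into unit operators and summing their estimated durations. The sketch below illustrates that idea only; the operator names match the interactions the study examines, but the durations are hypothetical placeholders, not the fitted regression estimates from the paper.

```python
# Illustrative KLM/FLM-style prediction: total task time is the sum of
# the unit-operator times in the task's decomposition.
# NOTE: the durations below are made-up placeholder values for the
# sketch, NOT the regression estimates reported in the paper.
OPERATOR_TIMES = {
    "tap": 0.10,      # single tap on a target (s) -- hypothetical
    "point": 0.25,    # move finger to a target (s) -- hypothetical
    "drag": 0.35,     # drag gesture (s) -- hypothetical
    "flick": 0.20,    # flick gesture (s) -- hypothetical
    "mental": 1.35,   # mental preparation (classic KLM "M" operator)
}

def predict_task_time(operators):
    """Predicted completion time for a sequence of unit operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example: prepare mentally, point at a button, then tap it.
total = predict_task_time(["mental", "point", "tap"])
print(f"{total:.2f} s")
```

Calibrating such a model for touch interfaces amounts to replacing the placeholder durations with empirically fitted estimates for each gesture, which is what the FLM contributes over the keyboard-and-mouse KLM.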


Human Factors in Computing Systems | 2015

Challenges for Wearable Camera: Understanding of the meaning behind photo-taking

Ahreum Lee

The wearable camera industry is facing low adoption rates due to concerns over the amount of data the devices collect and their inability to differentiate themselves from mobile phones and digital cameras. To improve adoption rates, the perception of the wearable camera should be changed. This research attempts to portray wearable cameras as tools for personal experience sharing. A 50-day study was conducted to determine what types of experiences are meaningful to users. These factors should be considered when designing wearable cameras for a personal re-experiencing system.


Proceedings of the Asian HCI Symposium'18 on Emerging Research Collection | 2018

Emotional Tagging with Lifelog Photos by Sharing Different Levels of Autobiographical Narratives

Ahreum Lee; Hokyoung Ryu

The lifelogging camera continuously captures one's surroundings; therefore, lifelog photos can form a medium by which to sketch out and share one's autobiographical memory with others. Frequently, however, the lifelog photos do not convey the context or significance of the situations to those not present when the photos were taken. This paper explores the social value of lifelog photos by proposing different levels of autobiographical narratives within Panofsky's framework. By measuring narrative engagement with a questionnaire and the activation level of the lateral prefrontal cortex (LPFC), we found that delivering the autobiographical narrative at the iconological level triggers the receiver's empathetic response and emotional tagging of the sharer's lifelog photos.


International Journal of Environmental Research and Public Health | 2018

Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults

Ahreum Lee; Hokyoung Ryu; Jae-Kwan Kim; Eunju Jeong

Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition.


International Journal of Industrial Ergonomics | 2013

Personality and its effects on learning performance: Design guidelines for an adaptive e-learning system based on a user model

Jieun Kim; Ahreum Lee; Hokyoung Ryu


Journal of Visualized Experiments | 2018

Measuring the Kinematics of Daily Living Movements with Motion Capture Systems in Virtual Reality

Kyoungwon Seo; Ahreum Lee; Jieun Kim; Hokyoung Ryu; Hojin Choi


Human Factors in Computing Systems | 2013

The fingerstroke-level model strikes back: a modified keystroke-level model in developing a gaming UI for 4G networks

Kiburm Song; Jihoon Kim; Yoon-Han Cho; Ahreum Lee; Hokyoung Ryu; Jung-Woon Choi; Yong Joo Lee


HCIK '16 Proceedings of HCI Korea | 2016

My mental scrapbook: What to store, what to remember, and what to retrieve in the lifelog data

Ahreum Lee; Hokyoung Ryu


Interactions | 2015

Imagine Lab, Hanyang University

Ahreum Lee; Kyoungwon Seo; Jieun Kim; Gyu Hyun Kwon; Hokyoung Ryu


QScience Proceedings | 2013

Mapping Mobile Learning in Space and Time

David Parsons; Jae Lim Ahn; Ahreum Lee; Kiburm Song; Miyoung Yoon

Collaboration


Dive into Ahreum Lee's collaborations.

Top Co-Authors


Gyu Hyun Kwon

Korea Institute of Science and Technology


Jihoon Kim

Pohang University of Science and Technology
