Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jinook Oh is active.

Publication


Featured research published by Jinook Oh.


Cognitive Psychology | 2015

Representing visual recursion does not require verbal or motor resources

Mauricio Martins; Zarja Muršič; Jinook Oh; W. Tecumseh Fitch

The ability to form and use recursive representations while processing hierarchical structures has been hypothesized to rely on language abilities. If so, linguistic resources should inevitably be activated while representing recursion in non-linguistic domains. In this study we use a dual-task paradigm to assess whether verbal resources are required to perform a visual recursion task. We tested participants across 4 conditions: (1) Visual recursion only, (2) Visual recursion with motor interference (sequential finger tapping), (3) Visual recursion with verbal interference (low load), and (4) Visual recursion with verbal interference (high load). Our results show that the ability to acquire and use visual recursive representations is not affected by the presence of verbal and motor interference tasks. Our finding that visual recursion can be represented without access to verbal resources suggests that recursion is available independently of language processing abilities.


Behavior Research Methods | 2017

CATOS (Computer Aided Training/Observing System): Automating animal observation and training

Jinook Oh; W. T. Fitch

In animal behavioral biology, an automated observing/training system may be useful for several reasons: (a) continuous observation of animals for documentation of specific, irregular events, (b) long-term intensive training of animals in preparation for behavioral experiments, and (c) elimination of potential cues and biases induced by humans during training and testing. Here, we describe an open-source-based system named CATOS (Computer Aided Training/Observing System) developed for such situations. There are several notable features in this system. CATOS is flexible and low cost because it is based on free open-source software libraries, common hardware parts, and open-system electronics based on Arduino. Automated video condensation is applied, leading to significantly reduced video data storage compared to the total active hours of the system. A data-viewing utility program helps a user browse recorded data quickly and efficiently. With these features, CATOS has the potential to be applied to many different animal species in various environments such as laboratories, zoos, or even private homes. Moreover, an animal's free access to the device without constraint, and a gamified learning process, enhance the animal's welfare and enrich its environment. As a proof of concept, the system was built and tested with two different species. Initially, the system was tested for approximately 10 months with a domesticated cat. The cat was successfully and fully automatically trained to discriminate three different spoken words. Then, in order to test the system's adaptability to other species and hardware components, we used it to train a laboratory rat for 3 weeks.
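The video condensation the abstract mentions can be illustrated with a minimal sketch: keep only frames that differ noticeably from the last retained frame, so long static stretches collapse to a single frame. This is an assumption about the general technique, not CATOS's actual implementation; frames are represented here as plain lists of brightness values rather than real camera frames.

```python
def condense(frames, threshold=10.0):
    """Drop frames that are nearly identical to the previously kept frame.

    `frames` is a list of equal-length brightness lists (one per frame);
    a real system would read camera frames, e.g. via OpenCV.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        prev = kept[-1]
        # Mean absolute pixel difference serves as a simple motion score.
        diff = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
        if diff > threshold:
            kept.append(frame)
    return kept

# A static scene with one burst of motion: only the first frame, the
# motion frame, and the return to the static scene are retained.
static = [0, 0, 0, 0]
moving = [50, 50, 50, 50]
print(len(condense([static, static, static, moving, static, static])))  # 3
```

In practice the threshold trades storage against sensitivity: a higher value discards more near-duplicate frames but risks missing subtle movements.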


Logopedics Phoniatrics Vocology | 2015

DigitalVHI—a freeware open-source software application to capture the Voice Handicap Index and other questionnaire data in various languages

Christian T. Herbst; Jinook Oh; Jitka Vydrová; Jan G. Švec

In this short report we introduce DigitalVHI, a free open-source software application for obtaining Voice Handicap Index (VHI) and other questionnaire data, which can be installed on clinic computers and used in clinical practice. The software can simplify clinical studies, since it makes the VHI scores directly available for analysis in digital form. It can be downloaded from http://www.christian-herbst.org/DigitalVHI/


Journal of Comparative Psychology | 2018

Artificial visual stimuli for animal experiments: An experimental evaluation in a prey capture context with common marmosets (Callithrix jacchus).

Jinook Oh; Vedrana Šlipogor; W. Tecumseh Fitch

Experimenters often use images of real objects to simulate interactions between animal subjects or visual stimuli on a touchscreen to test animal cognition. However, the degree to which nonhuman animals recognize 2-D images as representing the corresponding real objects remains debated. The common marmoset monkey (Callithrix jacchus) has been described as a species that spontaneously shows natural behaviors to 2-D images, for example, grasping behaviors toward insects and fear responses to snakes. In this study, we tested 10 monkeys with their favorite food item (crickets), 2-D images (a photo and videos of a cricket), and a 3-D plastic model to reevaluate marmosets' spontaneous responses to 2-D images and to explore which artificial visual stimuli can motivate spontaneous interactions. The monkeys showed grasping behavior toward the real cricket and the 3-D plastic model, but toward none of the 2-D images. Our experiment suggests that depth information is the most important factor eliciting predatory behavior from the marmosets, and, therefore, a stimulus produced by a 3-D printer could be a good alternative when a spontaneous interaction or a convincing stimulus is required. Furthermore, this work serves as a cautionary tale for those using 2-D image presentations with marmosets, and perhaps other animal species.


Behavior Research Methods | 2018

A technological framework for running and analyzing animal head turning experiments

Jinook Oh; Marisa Hoeschele; Stephan A. Reber; Vedrana Šlipogor; Thomas Bugnyar; W. Tecumseh Fitch

Head turning experiments are widely used to test the cognition of both human infants and non-human animal species. Monitoring head turns allows researchers to non-invasively assess attention to acoustic or visual stimuli. In the majority of head turning experiments, the head direction analyses have been accomplished manually, which is extremely labor intensive and can be affected by subjectivity or other human errors and limitations. In the current study, we introduce an open-source computer program for measuring head directions of freely moving animals including common marmoset monkeys (Callithrix jacchus), American alligators (Alligator mississippiensis), and Mongolian gerbils (Meriones unguiculatus) to reduce human effort and time in video coding. We also illustrate an exemplary framework for an animal head turning experiment with common marmoset monkeys. This framework incorporates computer-aided processes of data acquisition, preprocessing, and analysis using the aforementioned software and additional open-source software and hardware.
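The core measurement in such software, estimating a head direction angle from tracked points in each video frame, can be sketched as follows. The two-marker scheme (a head-center point and a nose/snout point) and the tolerance test are illustrative assumptions, not the actual algorithm of the published program.

```python
import math

def head_direction(nose, head_center):
    """Head direction in degrees from two tracked 2-D points
    (hypothetical markers; real software tracks features in video).

    0 degrees points along +x; angles increase counter-clockwise.
    """
    dx = nose[0] - head_center[0]
    dy = nose[1] - head_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def turned_toward(angle, stimulus_angle, tolerance=30.0):
    """True if the head points within `tolerance` degrees of the stimulus."""
    diff = abs((angle - stimulus_angle + 180.0) % 360.0 - 180.0)
    return diff <= tolerance

print(head_direction((1.0, 1.0), (0.0, 0.0)))  # 45.0
```

Per-frame angles like these, once thresholded against a stimulus direction, turn hours of video into a time series of head-turn events that can be analyzed automatically instead of hand-coded.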


NeuroImage | 2014

Fractal image perception provides novel insights into hierarchical cognition

M. J. Martins; Florian Ph.S. Fischmeister; E. Puig-Waldmüller; Jinook Oh; Alexander Geißler; Simon Robinson; W. T. Fitch; Roland Beisteiner


Psychology of Aesthetics, Creativity, and the Arts | 2013

Studying Aesthetics With the Method of Production: Effects of Context and Local Symmetry

Gesche Westphal-Fitch; Jinook Oh; W. Tecumseh Fitch


HardwareX | 2017

An open source automatic feeder for animal experiments

Jinook Oh; Riccardo Hofer; W. Tecumseh Fitch


arXiv: Computational Engineering, Finance, and Science | 2014

CATOS (Computer Aided Training/Observing System)

Jinook Oh


F1000Research | 2014

Discrimination of self-similar visual hierarchies activates the parieto-medial temporal pathway

Mauricio Martins; Florian Ph.S. Fischmeister; Estela Puig-Waldmueller; Alexander Geissler; Jinook Oh; Roland Beisteiner; W. Tecumseh Fitch

Collaboration


Dive into Jinook Oh's collaborations.

Top Co-Authors

Roland Beisteiner

Medical University of Vienna


Alexander Geißler

Medical University of Vienna
