
Publication


Featured research published by Daniel Hipp.


Archive | 2017

The Dimensional Divide: Learning from TV and Touchscreens During Early Childhood

Daniel Hipp; Peter Gerhardstein; Laura Zimmermann; Alecia Moser; Gemma Taylor; Rachel Barr

As children’s exposure to touchscreen technology and other digital media increases, so does the need to understand the conditions under which children are able to learn from this technology. The prevalence of screen media in the lives of young children has increased significantly over the last two decades. The use of touchscreen devices among 2–4-year-olds in the USA increased from 39% to 80% between 2011 and 2013 (Rideout, 2013). Despite frequent engagement with these devices, it is widely recognized that children exhibit a transfer deficit, a term coined to denote children’s consistently poorer learning from television and touchscreens relative to face-to-face interaction (see Barr, Developmental Review 30(2):128–154, 2010; Barr, Child Development Perspectives 7(4):205–210, 2013). In this chapter, we focus on understanding the transfer deficit when children engage in imitative learning from touchscreens and television (e.g., Dickerson et al., Developmental Psychobiology 55(7):719–732, 2013; Moser et al., Journal of Experimental Child Psychology 137:137–155, 2015; Zack et al., British Journal of Developmental Psychology 27(Pt 1):13–26, 2009; Zack et al., Scandinavian Journal of Psychology 54(1):20–25, 2013; Zimmermann et al., Child Development, in press). Specifically, we discuss the role of child experience, perceptual and cognitive constraints, transfer distance, and social scaffolding in the transfer deficit. We conclude with lessons for parents and early educators regarding the strategies that may enhance learning across the dimensional divide.


Frontiers in Psychology | 2014

The development of contour processing: evidence from physiology and psychophysics.

Gemma Taylor; Daniel Hipp; Alecia Moser; Kelly Dickerson; Peter Gerhardstein

Object perception and pattern vision depend fundamentally upon the extraction of contours from the visual environment. In adulthood, contour or edge-level processing is supported by the Gestalt heuristics of proximity, collinearity, and closure. Less is known, however, about the developmental trajectory of contour detection and contour integration. Within the physiology of the visual system, long-range horizontal connections in V1 and V2 are the likely candidates for implementing these heuristics. While post-mortem anatomical studies of human infants suggest that horizontal interconnections reach maturity by the second year of life, psychophysical research with infants and children suggests a considerably more protracted development. In the present review, data from infancy to adulthood will be discussed in order to track the development of contour detection and integration. The goal of this review is thus to integrate the development of contour detection and integration with research regarding the development of underlying neural circuitry. We conclude that the ontogeny of this system is best characterized as a developmentally extended period of associative acquisition whereby horizontal connectivity becomes functional over longer and longer distances, thus becoming able to effectively integrate over greater spans of visual space.


Infant Behavior & Development | 2012

Early operant learning is unaffected by socio-economic status and other demographic factors: a meta-analysis.

Peter Gerhardstein; Kelly Dickerson; Stacie S. Miller; Daniel Hipp

The relation between SES (socioeconomic status) and academic achievement in school-aged children is well established; children from low SES families have more difficulty in school. However, few studies have been able to establish a link between SES and learning in infancy, and thus the developmental onset of SES effects remains unknown. The limited studies that have been conducted to explore the link between SES and learning in infancy have generated mixed results; some demonstrate a link between SES and learning in infants as young as 6-9 months (Smith, Fagan, & Ulvund, 2002) while others do not. Further, studies examining the genetic as well as environmental contributors to learning in infancy and early childhood suggest that the effect of SES is likely cumulative and that as children develop, the effect of a low SES environment will become more pronounced (Tucker-Drob, Rhemtulla, Harden, Turkheimer, & Fask, 2011). Using aggregated data from 790 infants collected across 18 studies, we examined in a meta-analysis the contribution of SES and other demographic factors to learning of an operant kicking task in 2-4-month-old infants. Results indicated that, at least with respect to operant conditioning, an infant is an infant; that is, SES did not affect learning rate or ability to learn in infants under 4 months of age. SES effects may therefore be better characterized as cumulative, with tangible effects emerging sometime later in life.


International Symposium on Visual Computing | 2014

Evaluation of Perceptual Biases in Facial Expression Recognition by Humans and Machines

Xing Zhang; Lijun Yin; Daniel Hipp; Peter Gerhardstein

In this paper, we applied a reverse correlation approach to study the features that humans use to categorize facial expressions. The well-known portrait of the Mona Lisa was used as the base image to investigate the features differentiating happy and sad expressions. The base image was blended with sinusoidal noise masks to create the stimuli. Observers were required to view each image and categorize it as happy or sad. Analysis of responses using superimposed classification images revealed both the locations and the identity of the information required to represent each expression. To further investigate the results, a neural-network-based classifier was developed to identify the expression of the superimposed images from a machine learning perspective, revealing that the pattern by which humans perceive expressions is also recognized by machines.
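The reverse-correlation procedure described above can be illustrated with a minimal simulation. This sketch is illustrative only: a flat gray field stands in for the Mona Lisa base image, Gaussian noise stands in for the sinusoidal masks, and a toy rule stands in for a human observer; the mouth-region criterion and all names are assumptions, not details from the paper. The classification image is the mean noise on "happy" trials minus the mean noise on "sad" trials, which recovers the region driving the observer's decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the base portrait: a flat gray field.
base = np.full((64, 64), 0.5)

# Toy "observer": calls an image happy when an assumed mouth-region
# patch is brighter than it is in the base image.
mouth = (slice(40, 48), slice(24, 40))
def observer_says_happy(img):
    return img[mouth].mean() > base[mouth].mean()

happy_masks, sad_masks = [], []
for _ in range(2000):
    noise = rng.normal(0.0, 0.1, base.shape)  # sinusoidal masks in the paper
    stimulus = base + noise
    (happy_masks if observer_says_happy(stimulus) else sad_masks).append(noise)

# Classification image: mean "happy" noise minus mean "sad" noise.
ci = np.mean(happy_masks, axis=0) - np.mean(sad_masks, axis=0)

# The region the observer relied on emerges with positive weight,
# well above the near-zero background of the classification image.
print(ci[mouth].mean() > ci.mean())
```

With enough trials, pixels irrelevant to the decision average toward zero while the diagnostic region stands out, which is what lets the same map be handed to a machine classifier for comparison.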


Developmental Psychobiology | 2014

Age-related changes in visual contour integration: Implications for physiology from psychophysics

Daniel Hipp; Kelly Dickerson; Alecia Moser; Peter Gerhardstein

Visual contour detection is enhanced by grouping principles, such as proximity and collinearity, which appear to rely on horizontal connectivity in visual cortex. Previous experiments suggest that children require greater proximity to detect contours and that, unlike adults, collinearity does not compensate for their proximity limitation. Over two experiments we test whether closure, a global property known to facilitate contour detection, compensates for this limitation. Adults and children (3-9 years old) performed a 2AFC task; one panel contained an illusory contour (closed or open) in visual noise, and one only noise. The experiments were identical except proximity was doubled in Exp. 2, enabling shorter-range spatial integration. Results suggest children are limited by proximity, and that closure did not reliably improve their performance as it did for adults. We conclude that perceptual maturity lags behind anatomy within this system, and suggest that slow statistical learning of long-range orientation correlations controls this disparity.


International Conference on Multimedia and Expo | 2010

Expression-driven salient features: Bubble-based facial expression study by human and machine

Xing Zhang; Lijun Yin; Peter Gerhardstein; Daniel Hipp

Humans are able to recognize facial expressions of emotion from faces displaying a large set of confounding variables, including age, gender, ethnicity, and other factors. Much work has been dedicated to attempts to characterize the process by which this highly developed capacity functions. In this paper, we propose to investigate local expression-driven features important to distinguishing facial expressions using the so-called ‘Bubbles’ technique [4]. The Bubbles technique is a form of Gaussian masking that reveals the information contributing to human perceptual categorization. We conducted experiments with both human and machine observers. Human observers were required to view each bubble-masked expression image and identify its category. By collecting responses from observers and analyzing them statistically, we can identify the facial features that humans employ to distinguish different expressions. Humans appear to extract and use localized information specific to each expression for recognition. Additionally, we verified the findings by using the resulting features for expression classification with a conventional expression recognition algorithm on a public facial expression database.
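The Gaussian masking at the heart of the Bubbles technique can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: a random array stands in for a face image, and the bubble count and aperture width (`n_bubbles`, `sigma`) are arbitrary choices. A mask built from a few Gaussian apertures at random centers is multiplied into the image, so only the "bubbled" regions remain visible on a given trial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a face image, values in [0, 1].
face = rng.random((64, 64))

def bubble_mask(shape, n_bubbles=5, sigma=6.0, rng=rng):
    """Sum of Gaussian apertures at random centers, clipped to [0, 1]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy = rng.integers(0, shape[0])
        cx = rng.integers(0, shape[1])
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

mask = bubble_mask(face.shape)
stimulus = face * mask  # only regions under a bubble stay visible

print(stimulus.shape)
```

Across many trials, correlating the bubble locations with correct categorizations pinpoints which facial regions carry the diagnostic information for each expression.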


Journal of Vision | 2015

Visual spatial uncertainty influences auditory change localization

Kelly Dickerson; Jeremy Gaston; Timothy Mermagen; Ashley Foots; Daniel Hipp

Human auditory localization ability is generally good for single sources, with errors subtending less than 5 degrees at the midline and increasing to roughly 12 degrees in the periphery. Errors are generally larger when task demands are high, such as when multiple simultaneous sources are present. The current study shifts focus from the size of localization errors to the frequency of localization errors, examined under conditions of high and low visual uncertainty. In this study, the ability to localize the addition of a new source to a mixture of four simultaneous sources was examined under two conditions of visual uncertainty. The task for participants was to listen to two auditory scenes separated by a brief blank interval. The scenes contained sounds edited to 1000 ms duration that were representative of outdoor urban spaces. The first scene always contained four sound sources; the second always contained a change, the addition of a new sound source. At the end of the second scene, participants indicated where the change occurred by pointing to the change location. Change locations were represented either as nine clearly visible and identifiable sound-producing targets (speakers) in the free field (low visual uncertainty condition), or were obscured by occurring within an array of 90 visual targets (positioned 2 degrees apart), creating conditions of high visual spatial uncertainty. In general, performance was poorer when audio-visual correspondence was reduced. Further, the pattern of results is consistent with other multisource localization studies: performance was most accurate when changes occurred directly in front of the listener. Accuracy declined as changes were presented further in the periphery, and the impact of change position was greater under conditions of high visual spatial uncertainty (low audio-visual correspondence). Meeting abstract presented at VSS 2015.


Affective Computing and Intelligent Interaction | 2015

Perception driven 3D facial expression analysis based on reverse correlation and normal component

Xing Zhang; Zheng Zhang; Lijun Yin; Daniel Hipp; Peter Gerhardstein

Research on automated facial expression analysis (FEA) has focused on applying different feature extraction methods in texture space and geometric space, using holistic or local facial regions based on regular grids or facial anatomical structure. Little work has taken human perception into account. In this paper, we propose to study the facial expressive regions using a reverse correlation method, and further develop a novel 3D local normal component feature representation based on human perception. The classification image (CI), accumulated over multiple trials, reveals the shape features that alter the neutral Mona Lisa portrait toward the positive and negative domains. The differences can be identified by both humans and machines. Based on the CI and the derived local feature regions, a novel 3D normal component based feature (3D-NLBP) is proposed to represent positive and negative expressions (e.g., happiness and sadness). This approach achieves good performance and has been validated on both a high-resolution database and real-time low-resolution depth map videos.


Vision Research | 2012

The human visual system uses a global closure mechanism

Peter Gerhardstein; James Tse; Kelly Dickerson; Daniel Hipp; Alecia Moser


Journal of Vision | 2013

Using Reverse Correlation to Let Adults and Children Show Us Their Emotional Expression Templates

Daniel Hipp; Alecia Moser; Xing Zhang; Lijun Yin; Peter Gerhardstein

Collaboration


Dive into Daniel Hipp's collaboration.


Lijun Yin

Binghamton University

James Tse

Binghamton University
