Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cali M. Fidopiastis is active.

Publication


Featured research published by Cali M. Fidopiastis.


Archive | 2015

Foundations of Augmented Cognition

Dylan D. Schmorrow; Cali M. Fidopiastis

This book constitutes the proceedings of the 9th International Conference on the Foundations of Augmented Cognition, AC 2015, held as part of the 17th International Conference on Human-Computer Interaction, HCII 2015, which took place in Los Angeles, CA, USA, in August 2015. HCII 2015 received a total of 4843 submissions, of which 1462 papers and 246 posters were accepted for publication after a careful reviewing process. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers thoroughly cover the entire field of Human-Computer Interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. The 78 papers presented in the AC 2015 proceedings address the following major topics: cognitive performance and workload; BCI and operational neuroscience; cognition, perception and emotion measurement; adaptive and tutoring training; and applications of augmented cognition.


Perception | 2001

Recognising the style of spatially exaggerated tennis serves.

Frank E. Pollick; Cali M. Fidopiastis; Vic Braden

A technique for the construction of exaggerated human movements was developed and its effectiveness tested for the case of categorising tennis serves as flat, slice, or topspin. The technique involves treating movements as points in a high-dimensional space and uses average movements as the basis for constructing exaggerated movements. Exaggerated movements of a particular style are defined as those points in the space of movements which lie on a line originating at the style average and in the direction defined by the difference between the style average and the grand average. In order to visualise the movements, computer animation techniques were employed to transform the three-dimensional coordinates of the movement into the motion of a solid-body figure. These solid-body models were used in perceptual experiments to assess the effectiveness of the exaggeration technique. After an initial training session on the exemplars from the original library, subjects viewed the synthetic tennis-serve motions and, in two separate sessions, either made three-alternative categorisation judgments after viewing a single serve or rated dissimilarity after viewing a pair of serves. Results from both accuracy in the categorisation task and the structure of a multidimensional scaling solution of the matrix of dissimilarities indicated that, as distance from the grand average increased, the service motion became more distinct and more accurately identified.
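The exaggeration step amounts to simple vector arithmetic in the movement space: extrapolate from the grand average through a style average. The sketch below illustrates that idea with hypothetical array shapes and a made-up exaggeration factor; it is not the authors' implementation.

```python
import numpy as np

def exaggerate(style_avg: np.ndarray, grand_avg: np.ndarray, k: float) -> np.ndarray:
    """Extrapolate along the line from the grand average through the style
    average: k = 0 returns the grand average, k = 1 the style average,
    and k > 1 an exaggerated movement of that style."""
    return grand_avg + k * (style_avg - grand_avg)

# Hypothetical data: each serve is a flattened trajectory of
# (frames x markers x 3) coordinates, i.e. a point in a high-dimensional space.
rng = np.random.default_rng(0)
flat, slice_, topspin = (rng.normal(size=(20, 3000)) for _ in range(3))

grand_avg = np.vstack([flat, slice_, topspin]).mean(axis=0)
flat_exaggerated = exaggerate(flat.mean(axis=0), grand_avg, k=1.5)
```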


Cyberpsychology, Behavior, and Social Networking | 2006

Human Experience Modeler: Context-Driven Cognitive Retraining to Facilitate Transfer of Learning

Cali M. Fidopiastis; Christopher B. Stapleton; Janet Whiteside; Charles E. Hughes; Stephen M. Fiore; Glenn A. Martin; Jannick P. Rolland; Eileen M. Smith

We describe a cognitive rehabilitation mixed-reality system that allows therapists to explore natural cuing, contextualization, and theoretical aspects of cognitive retraining, including transfer of training. The Human Experience Modeler (HEM) mixed-reality environment allows for a contextualized learning experience with the advantages of controlled stimuli, experience capture, and feedback that would not be feasible in a traditional rehabilitation setting. A pilot study testing the integrated components of the HEM is discussed, in which the participant presented with working-memory impairments due to an aneurysm.


Proceedings of the AMI-ARCS 2004 Workshop | 2004

Physically-based Deformation of High-Resolution 3D Lung Models for Augmented Reality based Medical Visualization

Anand P. Santhanam; Cali M. Fidopiastis; Felix G. Hamza-Lup; Jannick P. Rolland; Celina Imielinska

Visualization tools using Augmented Reality Environments are effective in applications related to medical training, prognosis and expert interaction. Such medical visualization tools can also provide key visual insights into the physiology of deformable anatomical organs (e.g. lungs). In this paper we propose a deformation method that facilitates physically-based elastostatic deformations of 3D high-resolution polygonal models. The implementation of the deformation method as a pre-computation approach is shown for a 3D high-resolution lung model. The deformation is represented as an integration of the applied force and the local elastic property assigned to the 3D lung model. The proposed deformation method shows faster convergence to equilibrium as compared to other physically-based simulation methods. The proposed method also accounts for anisotropic tissue elastic properties. The transfer functions are formulated in such a way that they overcome stiffness effects during deformations.
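As a rough illustration of the precomputation idea only, the generic sketch below superimposes displacements from applied forces, attenuated by distance and scaled by a per-vertex compliance (inverse stiffness). The falloff kernel, compliance values, and array shapes are placeholders; this is not the authors' transfer-function formulation.

```python
import numpy as np

def precompute_displacements(vertices: np.ndarray,
                             force_nodes: np.ndarray,
                             forces: np.ndarray,
                             compliance: np.ndarray,
                             falloff: float = 0.05) -> np.ndarray:
    """Accumulate, for every mesh vertex, the displacement produced by forces
    applied at a set of nodes, weighted by a distance falloff and scaled by a
    local compliance value; the resulting table can be stored and replayed at
    run time. Shapes: vertices (V, 3), force_nodes (F, 3), forces (F, 3),
    compliance (V,)."""
    disp = np.zeros_like(vertices)
    for node, f in zip(force_nodes, forces):
        dist = np.linalg.norm(vertices - node, axis=1)
        weight = np.exp(-dist / falloff)        # influence decays with distance
        disp += weight[:, None] * f[None, :]
    return compliance[:, None] * disp           # per-axis compliance would model anisotropy
```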


PLOS ONE | 2010

Reentrant Processing in Intuitive Perception

Phan Luu; Alexandra Geyer; Cali M. Fidopiastis; Gwendolyn E. Campbell; Tracey Wheeler; Joseph Cohn; Don M. Tucker

The process of perception requires not only the brain's receipt of sensory data but also the meaningful organization of that data in relation to the perceptual experience held in memory. Although it typically results in a conscious percept, the process of perception is not fully conscious. Research on the neural substrates of human visual perception has suggested that regions of limbic cortex, including the medial orbital frontal cortex (mOFC), may contribute to intuitive judgments about perceptual events, such as guessing whether an object might be present in a briefly presented fragmented drawing. Examination of dense-array measures of cortical electrical activity during a modified Waterloo Gestalt Closure Task showed, as expected, that activity in medial orbital frontal electrical responses (about 250 ms) was associated with intuitive judgments. Activity in the right temporal-parietal-occipital (TPO) region was found to predict mOFC (∼150 ms) activity and, in turn, was subsequently influenced by the mOFC at a later time (∼300 ms). The initial perception of gist or meaning of a visual stimulus in limbic networks may thus yield reentrant input to the visual areas to influence continued development of the percept. Before perception is completed, the initial representation of gist may support intuitive judgments about the ongoing perceptual process.


Presence: Teleoperators & Virtual Environments | 2005

Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: visual acuity metrics

Cali M. Fidopiastis; Christopher A. Fuhrman; Catherine Meyer; Jannick P. Rolland

Head-mounted display design is an iterative process. As such, a standardized user-centered protocol for assessing head-mounted display performance during each phase of prototype development should be employed. In this paper, we first describe a methodology for assessing prototype head-mounted displays and virtual environments using visual performance metrics. We then present an application of the methodology using a prototype of a projection head-mounted display and the first module of our assessment: resolution visual acuity as a function of contrast. To evaluate the total system, we also used three different light levels and two different types of projection materials. Results from both studies indicate that the resolution visual acuity metric accurately identified reductions in user visual acuity caused by parameters of the projection display and those of the phase-conjugate material. Results further support the need for benchmark metrics that allow comparison of prototype head-mounted display performance through each stage of design.
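For context, resolution visual acuity is usually reported by converting the smallest correctly resolved optotype into a minimum angle of resolution. The helper below shows that standard conversion to logMAR, which could then be tabulated per contrast level, light level, and screen material; it is a generic acuity formula, not the paper's specific protocol, and the example numbers are placeholders.

```python
import math

def logmar(letter_height_mm: float, viewing_distance_m: float) -> float:
    """Convert the height of the smallest correctly read optotype into logMAR.
    A standard optotype subtends 5 arcminutes when it is just resolvable, so
    the subtended angle divided by 5 gives the minimum angle of resolution."""
    angle_rad = 2 * math.atan(letter_height_mm / 1000.0 / (2 * viewing_distance_m))
    angle_arcmin = math.degrees(angle_rad) * 60.0
    return math.log10(angle_arcmin / 5.0)

# A 3.6 mm letter just resolved at 1 m is roughly logMAR 0.4 (about 20/50).
print(round(logmar(3.6, 1.0), 2))
```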


International Conference on Foundations of Augmented Cognition | 2007

An adaptive instructional architecture for training and education

Denise Nicholson; Cali M. Fidopiastis; Larry Davis; Dylan D. Schmorrow; Kay M. Stanney

Office of Naval Research (ONR) initiatives such as Human Performance Training and Education (HPT&E) and Virtual Technologies and Environments (VIRTE) have primarily focused on developing the strategies and technologies for creating multimodal reality- or simulation-based content. The resulting state-of-the-art training and education prototype simulators still rely heavily on instructors to interpret performance data and adapt instruction via scenario generation, mitigations, feedback, and after-action review tools. Further research is required to fully close the loop and provide automated, adaptive instruction in these learning environments. To meet this goal, an ONR-funded initiative focusing on the Training and Education arm of the HPT&E program will address the processes and components required to deliver these capabilities in the form of an Adaptive Instructional Architecture (AIA). An overview of the AIA as it applies to Marine Corps Warfighter training protocols is given, as well as the theoretical foundations supporting it.


International Conference on Foundations of Augmented Cognition | 2009

Impact of Automation and Task Load on Unmanned System Operator's Eye Movement Patterns

Cali M. Fidopiastis; Julie M. Drexler; Daniel Barber; Keryl Cosenzo; Michael J. Barnes; Jessie Y. C. Chen; Denise Nicholson

Eye tracking under naturalistic viewing conditions may provide a means to assess operator workload in an unobtrusive manner. Specifically, we explore the use of a nearest neighbor index of workload calculated using eye fixation patterns obtained from operators navigating an unmanned ground vehicle under different task loads and levels of automation. Results showed that fixation patterns map to the operator's experimental condition, suggesting that systematic eye movements may characterize each task. Further, different methods of calculating the workload index are highly correlated, r(46) = .94, p = .01. While the eye movement workload index matches operator reports of workload based on the NASA TLX, the metric fails in some instances. Interestingly, these departure points may relate to the operator's perceived attentional control score. We discuss these results in relation to automation triggers for unmanned systems.
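A nearest neighbor index of this kind is commonly computed in the Clark-Evans form: the mean distance from each fixation to its closest neighbor, divided by the distance expected if the same number of points were scattered randomly over the viewing area. The sketch below is that common formulation with placeholder inputs, not necessarily the exact variant used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(fixations: np.ndarray, area: float) -> float:
    """Clark-Evans nearest neighbor index for 2-D fixation points.
    fixations: (N, 2) x/y coordinates; area: size of the viewing region in the
    same squared units. Values < 1 suggest clustered fixations, ~1 a random
    pattern, and > 1 a dispersed (systematic scanning) pattern."""
    n = len(fixations)
    dists, _ = cKDTree(fixations).query(fixations, k=2)
    observed = dists[:, 1].mean()          # skip each point's zero distance to itself
    expected = 0.5 * np.sqrt(area / n)     # mean NN distance for a random pattern
    return observed / expected

# Placeholder example: 200 fixations on a 1024 x 768 display.
rng = np.random.default_rng(1)
fix = rng.uniform([0, 0], [1024, 768], size=(200, 2))
print(nearest_neighbor_index(fix, area=1024 * 768))
```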


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2004

Albertian errors in head-mounted displays: I. Choice of eye-point location for a near- or far-field task visualization.

Jannick P. Rolland; Yonggang Ha; Cali M. Fidopiastis

A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.
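A simplified way to see where such errors come from: the renderer projects a virtual point onto the display's image plane from an assumed eye point, while the viewer's rays actually originate from a possibly different optical eye point. The 2-D sketch below triangulates the perceived point from that mismatch; the distances and offsets are placeholder numbers, and the paper's analysis of the three candidate eye points and eye rotation is far more detailed.

```python
import numpy as np

def project_to_screen(eye, target, screen_z):
    """Intersect the ray eye -> target with the image plane z = screen_z (2-D x, z)."""
    t = (screen_z - eye[1]) / (target[1] - eye[1])
    return np.array([eye[0] + t * (target[0] - eye[0]), screen_z])

def triangulate(eye_l, pt_l, eye_r, pt_r):
    """Intersect the two viewing rays eye_l -> pt_l and eye_r -> pt_r."""
    d_l, d_r = pt_l - eye_l, pt_r - eye_r
    s, _ = np.linalg.solve(np.column_stack([d_l, -d_r]), eye_r - eye_l)
    return eye_l + s * d_l

# Placeholder numbers: 65 mm IPD, virtual image plane 1 m away, target 0.5 m away,
# renderer assumes eye points 11 mm behind the eyes' actual optical eye points.
ipd, screen_z = 0.065, 1.0
target = np.array([0.02, 0.5])
render_eyes = [np.array([-ipd / 2, -0.011]), np.array([ipd / 2, -0.011])]
view_eyes = [np.array([-ipd / 2, 0.0]), np.array([ipd / 2, 0.0])]

screen_pts = [project_to_screen(e, target, screen_z) for e in render_eyes]
perceived = triangulate(view_eyes[0], screen_pts[0], view_eyes[1], screen_pts[1])
print("rendered depth error (m):", perceived[1] - target[1])
```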


ACM Southeast Regional Conference | 2005

Distributed training system with high-resolution deformable virtual models

Felix G. Hamza-Lup; Anand P. Santhanam; Cali M. Fidopiastis; Jannick P. Rolland

Virtual environments (VEs) allow the development of promising tools in several application domains. In medical training, the learning potential of VEs is significantly amplified by the capability of the tools to present 3D deformable models in real time. This paper presents a distributed software architecture that allows visualization of a 3D deformable lung model superimposed on a human patient simulator at several remote trainee locations. The paper presents the integration of deformable 3D anatomical models in a distributed software architecture targeted towards medical prognostics and training, as well as the assessment of shared-state consistency across multiple users. The results of the assessment show that, with delay compensation, the distributed interactive VE prototype achieves high levels of shared-state consistency.
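Delay compensation in such shared-state systems typically means extrapolating a remote site's last reported state forward by the measured network latency, so that all participants render approximately the same moment. The snippet below is a minimal, generic illustration of that idea with hypothetical field names; it is not the paper's specific algorithm.

```python
import time
from dataclasses import dataclass

@dataclass
class SharedParam:
    """Last update received for a shared deformation parameter."""
    value: float      # e.g. lung inflation level at the sender
    rate: float       # first derivative reported by the sender
    sent_at: float    # sender timestamp in seconds (clocks assumed synchronized)

def compensated(param: SharedParam, now: float) -> float:
    """First-order (dead-reckoning style) extrapolation: predict the remote
    state at local time `now`, hiding the transmission delay."""
    return param.value + param.rate * (now - param.sent_at)

# Hypothetical usage: an update that arrived 80 ms after it was sent.
update = SharedParam(value=0.42, rate=0.9, sent_at=time.time() - 0.08)
print(compensated(update, time.time()))
```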

Collaboration


Dive into Cali M. Fidopiastis's collaboration.

Top Co-Authors

Denise Nicholson (University of Central Florida)
Eileen M. Smith (University of Central Florida)
Larry Davis (University of Central Florida)
Kay M. Stanney (University of Central Florida)
Aniket A. Vartak (University of Central Florida)