
Publication


Featured research published by Bill Kapralos.


Advances in Human-Computer Interaction | 2013

Assessment in and of serious games: an overview

Francesco Bellotti; Bill Kapralos; Kiju Lee; Pablo Moreno-Ger; Riccardo Berta

There is a consensus that serious games have significant potential as a tool for instruction. However, their effectiveness in terms of learning outcomes is still understudied, mainly due to the complexity involved in assessing intangible measures. A systematic approach, based on established principles and guidelines, is necessary to enhance the design of serious games, and many studies lack a rigorous assessment. An important aspect in the evaluation of serious games, as with other educational tools, is user performance assessment. This is an important area of exploration because serious games are intended to evaluate learning progress as well as outcomes. This also emphasizes the importance of providing appropriate feedback to the player. Moreover, performance assessment enables adaptivity and personalization to meet individual needs in various aspects, such as learning styles, information provision rates, and feedback. This paper first reviews related literature regarding the educational effectiveness of serious games. It then discusses how to assess the learning impact of serious games and methods for competence and skill assessment. Finally, it suggests two major directions for future research: characterization of the player's activity and better integration of assessment in games.


International Journal of Imaging Systems and Technology | 2003

Audiovisual localization of multiple speakers in a video teleconferencing setting

Bill Kapralos; Michael Jenkin; Evangelos E. Milios

Attending to multiple speakers in a video teleconferencing setting is a complex task. From a visual point of view, multiple speakers can occur at different locations and present radically different appearances. From an audio point of view, multiple speakers may be speaking at the same time, and background noise may make it difficult to localize sound sources without some a priori estimate of the sound source locations. This article presents a novel sensor and corresponding sensing algorithms to address the task of attending, simultaneously, to multiple speakers for video teleconferencing. A panoramic visual sensor is used to capture a 360° view of the speakers in the environment, and from this view potential speakers are identified via a color histogram approach. A directional audio system based on beamforming is then used to confirm potential speakers and attend to them. Experimental evaluation of the sensor and its algorithms is presented, including sample performance of the entire system in a teleconferencing setting.
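The paper's beamforming stage is not reproduced in the abstract; the core idea behind confirming a speaker direction can be sketched as a delay-and-sum beamformer, shown here as a minimal two-microphone example (sample rate and delay values are illustrative assumptions):

```python
import numpy as np

# Illustrative delay-and-sum beamformer: shift one microphone's signal by a
# candidate steering delay and sum; the steering delay that maximizes output
# power indicates the source direction.
delay_samples = 5               # true inter-microphone delay (assumed)

rng = np.random.default_rng(0)
source = rng.standard_normal(4096)

mic1 = source
mic2 = np.roll(source, delay_samples)   # delayed copy at the second mic

def steered_power(d):
    """Power of the delay-and-sum output when steering by d samples."""
    aligned = mic1 + np.roll(mic2, -d)
    return float(np.mean(aligned ** 2))

# Scan candidate delays; the peak corresponds to the true delay.
best = max(range(-20, 21), key=steered_power)
print(best)  # 5
```

A real system would scan steering directions rather than raw sample delays and use multiple microphones, but the power-maximization principle is the same.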


New Technologies, Mobility and Security | 2008

Biometric Identification System Based on Electrocardiogram Data

Youssef Gahi; Meryem Lamrani; Abdelhak Zoglat; Mouhcine Guennoun; Bill Kapralos; Khalil El-Khatib

Recent advancements in computing and digital signal processing technologies have made automated identification of people based on their biological, physiological, or behavioral traits a feasible approach for access control. The wide variety of available technologies has also increased the number of traits and features that can be collected and used to more accurately identify people. Systems that use biological, physiological, or behavioral traits to grant access to resources are called biometric systems. In this paper, we present a biometric identification system based on the Electrocardiogram (ECG) signal. The system extracts 24 temporal and amplitude features from an ECG signal and, after processing, reduces the set of features to the nine most relevant features. Preliminary experimental results indicate that the system is accurate and robust and can achieve a 100% identification rate with the reduced set of features.
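The abstract does not list the 24 features or the matching rule, so the identification step can only be sketched in general terms: enroll one feature-vector template per person, then match a probe by nearest-neighbor distance. The names and feature vectors below are entirely hypothetical:

```python
import numpy as np

# Hypothetical enrolled templates: one mean feature vector per person,
# using the nine-feature reduced set mentioned in the abstract.
rng = np.random.default_rng(1)
n_features = 9
enrolled = {name: rng.normal(loc=i, scale=0.1, size=n_features)
            for i, name in enumerate(["alice", "bob", "carol"])}

def identify(probe, templates):
    """Return the enrolled identity whose template is nearest to the probe."""
    return min(templates, key=lambda n: np.linalg.norm(probe - templates[n]))

# A probe drawn near bob's template should identify as bob.
probe = enrolled["bob"] + rng.normal(scale=0.05, size=n_features)
print(identify(probe, enrolled))  # bob
```

The actual system would derive the probe vector from fiducial points of a measured ECG beat; only the template-matching idea is shown here.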


Presence: Teleoperators & Virtual Environments | 2008

Virtual audio systems

Bill Kapralos; Michael Jenkin; Evangelos E. Milios

To be immersed in a virtual environment, the user must be presented with plausible sensory input, including auditory cues. A virtual (three-dimensional) audio display aims to allow the user to perceive the position of a sound source at an arbitrary position in three-dimensional space, despite the fact that the generated sound may be emanating from a fixed number of loudspeakers at fixed positions in space or a pair of headphones. The foundation of virtual audio rests on the development of technology to present auditory signals to the listener's ears so that these signals are perceptually equivalent to those the listener would receive in the environment being simulated. This paper reviews the human perceptual and technical literature relevant to the modeling and generation of accurate audio displays for virtual environments. Approaches to acoustical environment simulation are summarized, and the advantages and disadvantages of the various approaches are presented.
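One standard way to deliver perceptually equivalent signals over headphones, reviewed in work of this kind, is binaural rendering: convolving a mono source with a left- and right-ear impulse response. The sketch below uses toy impulse responses (a bare interaural delay and level difference), not measured HRIR data:

```python
import numpy as np

# Minimal binaural-rendering sketch: one mono tone, two toy head-related
# impulse responses (HRIRs) standing in for measured ones.
fs = 44100
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440 * t)            # 0.1 s, 440 Hz tone

# Toy HRIRs for a source to the listener's right: the right ear receives
# the sound earlier and louder than the left ear (values assumed).
hrir_right = np.zeros(32); hrir_right[0] = 1.0
hrir_left = np.zeros(32);  hrir_left[20] = 0.6   # later and attenuated

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)

# The interaural level difference survives the rendering.
print(np.max(np.abs(left)) < np.max(np.abs(right)))  # True
```

Real systems substitute measured or modeled HRIRs for the listener and source direction; the convolution step itself is unchanged.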


Advances in Human-Computer Interaction | 2013

User Assessment in Serious Games and Technology-Enhanced Learning

Francesco Bellotti; Bill Kapralos; Kiju Lee; Pablo Moreno-Ger

1. Department of Naval, Electric, Electronic and Telecommunications Engineering, University of Genoa, Via all’Opera Pia 11/a, 16145 Genoa, Italy
2. Faculty of Business and Information Technology, University of Ontario Institute of Technology, 2000 Simcoe Street North, Oshawa, Canada L1H 7K4
3. Department of Mechanical and Aerospace Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
4. Faculty of Computer Science, Universidad Complutense de Madrid, Ciudad Universitaria, 28040 Madrid, Spain


Archive | 2011

Entertainment Computing – ICEC 2011

Junia Coutinho Anacleto; Sidney S. Fels; Nicholas Graham; Bill Kapralos; Magy Saif El-Nasr; Kevin Stanley

This book constitutes the refereed proceedings of the 10th International Conference on Entertainment Computing, ICEC 2011, held in Vancouver, Canada, in October 2011, under the auspices of IFIP. The 20 revised long papers, 18 short papers, and 24 poster papers and demos presented were carefully reviewed and selected from 94 initial submissions. The papers cover all main domains of entertainment computing, from interactive music to games, drawing on a wide range of scientific domains from aesthetics to computer science. The papers are organized in topical sections on story, active games, player experience, camera and 3D, educational entertainment, game development, self and identity, and social and mobile entertainment; plus the four categories: demonstrations, posters, workshop, and tutorial.


Archive | 2014

Healthcare Training Enhancement Through Virtual Reality and Serious Games

Sandrine de Ribaupierre; Bill Kapralos; Faizal A. Haji; Eleni Stroulia; Adam Dubrowski; Roy Eagleson

The use of immersive 3D virtual environments and serious games, that is, video games used for educational purposes, has been increasing, and only recently have serious games been considered for healthcare training. For example, a number of commercial surgical simulators offer great potential for the training of basic skills and techniques, if the tedium of repeated rehearsal can be overcome. It is generally recognized that more abstract problem-solving and knowledge-level training needs to be incorporated into simulated scenarios. This chapter explores examples of what has been developed in terms of teaching models and evaluative methodologies, then discusses the educational theories explaining why virtual simulations and serious games are an important teaching tool, and finally suggests how to assess their value within an educational context. The tasks being trained span several levels of abstraction, from kinematic and dynamic aspects to domain-knowledge training. The evaluation of the trainee at each level of this hierarchy necessitates objective metrics. We describe a unifying framework for evaluating the speed and accuracy of these multi-level tasks, needed for validating their effectiveness before inclusion in medical training curricula. In addition, specific case studies are presented and research results brought forward regarding the development of virtual simulations, including those for neurosurgical procedures, EMS training, and patient teaching modules.


Proceedings of the First International Conference on Gameful Design, Research, and Applications | 2013

The missing piece in the gamification puzzle

David Rojas; Bill Kapralos; Adam Dubrowski

Gamification, that is, employing game design elements in non-gaming applications to make them more fun, engaging, and motivating, has been growing in popularity and is seen in a large number of contexts. In this paper, we present a framework that seeks to provide investigators with guidelines for the implementation of gamification. The proposed framework is an adaptation of a framework proposed by the Medical Research Council in 2000, which has since been extensively applied to research in health services, public health, and social policy related to health. The use of this framework within the gamification field may help make gamification a more controlled intervention that can be documented and evaluated, with replicated outcomes across differing contexts.


Conference on Future Play | 2008

Spatial sound for video games and virtual environments utilizing real-time GPU-based convolution

Brent Cowan; Bill Kapralos

The generation of spatial audio is computationally very demanding; as a result, accurate spatial audio is typically overlooked in games and virtual environment applications, leading to a decrease in both performance and the user's sense of presence or immersion. Driven by the gaming industry and the great emphasis placed on the visual sense, consumer computer graphics hardware (the graphics processing unit in particular) has greatly advanced in recent years, even outperforming the computational capacity of CPUs. This has allowed for real-time, interactive, realistic graphics-based applications on typical consumer-level PCs. Despite the many similarities between the fields of spatial audio and computer graphics, computer graphics (and image synthesis in particular) has advanced far beyond spatial audio, given the emphasis placed on the generation of believable visual cues over other perceptual cues, including auditory ones. Given the widespread availability of computer graphics hardware as well as the similarities between spatial audio and image synthesis, this work investigates the application of graphics processing units to the computationally efficient generation of spatial audio for dynamic and interactive games and virtual environments. We present a real-time GPU-based convolution method and illustrate its superior efficiency relative to conventional, software-based, time-domain convolution.
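The paper's GPU implementation is not reproduced here; this CPU sketch shows only the underlying idea that frequency-domain (FFT) convolution computes the same result as direct time-domain convolution at far lower asymptotic cost (O(n log n) versus O(n^2)), which is what makes real-time convolution of long impulse responses feasible on parallel hardware:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.standard_normal(1024)       # dry audio block (illustrative)
impulse = rng.standard_normal(256)       # room/HRTF impulse response

# Direct time-domain convolution: O(n^2) in the lengths involved.
direct = np.convolve(signal, impulse)

# FFT-based convolution: transform both, multiply spectra, inverse-transform,
# zero-padding to the full output length n.
n = len(signal) + len(impulse) - 1
fast = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(impulse, n), n)

print(np.allclose(direct, fast))  # True
```

A GPU version parallelizes the FFT and pointwise multiply across many threads, but the numerical result is the same convolution shown above.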


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2003

Sonification of range information for 3-D space perception

Evangelos E. Milios; Bill Kapralos; Agnieszka Kopinska; Sotirios Stergiopoulos

We present a device that allows three-dimensional (3-D) space perception by sonification of range information obtained via a point laser range sensor. The laser range sensor is worn by a blindfolded user, who scans space by pointing the laser beam in different directions. The resulting stream of range measurements is then converted to an auditory signal whose frequency or amplitude varies with the range. Our device differs from existing navigation aids for the visually impaired. Such devices use sonar ranging whose primary purpose is to detect obstacles for navigation, a task to which sonar is well suited due to its wide beam width. In contrast, the purpose of our device is to allow users to perceive the details of 3-D space that surrounds them, a task to which sonar is ill suited, due to artifacts generated by multiple reflections and due to its limited range. Preliminary trials demonstrate that the user is able to easily and accurately detect corners and depth discontinuities and to perceive the size of the surrounding space.
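The abstract says the auditory signal's frequency or amplitude varies with range but does not give the mapping, so the sketch below assumes a logarithmic range-to-frequency mapping with made-up sensor limits and pitch endpoints, purely to illustrate the sonification idea:

```python
import numpy as np

FS = 8000                         # audio sample rate in Hz (assumed)
F_NEAR, F_FAR = 2000.0, 200.0     # near -> high pitch, far -> low (assumed)
R_MIN, R_MAX = 0.2, 10.0          # sensor range limits in metres (assumed)

def range_to_freq(r):
    """Log-map a range r (metres) to a tone frequency between F_NEAR and F_FAR."""
    r = min(max(r, R_MIN), R_MAX)
    x = (np.log(r) - np.log(R_MIN)) / (np.log(R_MAX) - np.log(R_MIN))
    return F_NEAR + x * (F_FAR - F_NEAR)

def tone(r, dur=0.05):
    """A short sine burst whose pitch encodes one range measurement."""
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * range_to_freq(r) * t)

# Nearer surfaces sound higher-pitched than farther ones.
print(range_to_freq(0.5) > range_to_freq(5.0))  # True
```

Streaming successive measurements through `tone` as the user sweeps the laser would produce the kind of pitch contour from which corners and depth discontinuities become audible.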

Collaboration


Dive into Bill Kapralos's collaborations.

Top Co-Authors

Brent Cowan, University of Ontario Institute of Technology
Alvaro Uribe-Quevedo, Military University Nueva Granada
Miguel Vargas Martin, University of Ontario Institute of Technology