
Publication


Featured research published by Cristian Gadea.


symposium on applied computational intelligence and informatics | 2011

An intelligent gesture interface for controlling TV sets and set-top boxes

Dan Ionescu; Bogdan Ionescu; Cristian Gadea; Shahidul Islam

The control of computers and electronics through hand gestures has gained significant industry and academic attention lately for the usability benefits and convenience that it offers users. Of particular research interest has been the control of living room environments containing televisions and set-top boxes. However, existing research has failed to provide a flexible solution for controlling such devices by hand gestures: it has relied on cameras that are sensitive to environmental factors such as lighting or that have unreasonable calibration demands. Additionally, the gesture-processing techniques used so far have imposed a considerable computational burden and have not provided a consistent and compelling TV control experience across a large variety of users and their homes. In this paper, the data returned from a custom 3D depth camera and a customizable gesture language are used to create an intelligent gesture interface for the control of TVs and set-top boxes. By using an infrared blaster to emit the commands typical of a physical remote, any television set or set-top box can be controlled to perform actions such as turning the TV on, changing the volume, muting the sound or changing the channel. Finally, a test setup is presented where a common television and a satellite receiver are controlled exclusively through hand gestures.
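The control loop the abstract describes — a recognized gesture in, an infrared remote command out — can be sketched as a simple dispatch table. This is an illustrative sketch only; the gesture names and command identifiers below are hypothetical and not taken from the paper:

```python
# Hypothetical mapping from recognized gestures to the IR remote-control
# commands that an infrared blaster would emit toward the TV or set-top box.
GESTURE_TO_COMMAND = {
    "palm_push": "POWER",
    "swipe_up": "VOLUME_UP",
    "swipe_down": "VOLUME_DOWN",
    "fist": "MUTE",
    "swipe_left": "CHANNEL_DOWN",
    "swipe_right": "CHANNEL_UP",
}

def dispatch(gesture):
    """Translate a recognized gesture into an IR command name, or None
    if the gesture is not part of the configured gesture language."""
    return GESTURE_TO_COMMAND.get(gesture)
```

A customizable gesture language, as described in the abstract, would amount to letting users edit such a mapping rather than hard-coding it.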


international conference on computer communications and networks | 2011

A Collaborative Cloud-Based Multimedia Sharing Platform for Social Networking Environments

Cristian Gadea; Bogdan Solomon; Bogdan Ionescu; Dan Ionescu

The amount of multimedia content on the internet has been growing at a remarkable rate, and users are increasingly looking to share online media with colleagues and friends on social networks. Several commercial and academic solutions have attempted to make it easier to share this large variety of online content with others, but they are generally limited to sending links. Existing products have not been able to provide a scalable cloud-based system that synchronizes disparate web content among many users in real-time. Additionally, they have lacked a platform with a modular architecture that can be extended by developers to support new sources of online media. In this paper, a cloud-based software architecture for a multimedia collaboration platform is introduced. The platform is accessible from a typical web browser and allows users to collaborate over webcam chat while viewing videos, photos, maps, documents, and listening to music, all in real-time. As examples, it is shown how a distributed system called Watch Together was deployed to real users within Facebook and an e-learning environment. Usage data is provided from both deployments and observations are made on how users share and consume real-time multimedia content.


international conference on computer communications and networks | 2011

A Multimodal Interaction Method that Combines Gestures and Physical Game Controllers

Dan Ionescu; Bogdan Ionescu; Cristian Gadea; Shahidul Islam

Motion-based control of video games has gained significant attention from both academic and industrial research groups for the unique interactive experiences it offers. Of particular research interest has been the control of games through gesture-based interfaces enabled by 3D cameras that have recently been made affordable. However, existing research has yet to combine the benefits of a 3D camera with those of a physical game controller in a way that uses accurate gesture and controller tracking to provide six degrees of freedom and one-to-one correspondence between the real-world 3D space and the virtual environment. This paper presents a natural man-machine interaction method whereby a user is able to control a virtual space by using one hand to perform gestures and the other hand to wield a physical controller. The data returned from a custom 3D depth camera is used to obtain not only hand gestures (number of fingers and their angles), but also the absolute position of the physical controller. This 3D data is then combined with the orientation data returned by the accelerometers and gyroscopes within the physical controller. The controller data is fused in real-time into a composite transformation matrix that is applied to a 3D object. Two game prototypes are presented that combine hand gestures and a physical controller to create an entirely new level of interactive gaming.
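The fusion step described above — combining the controller's absolute position from the depth camera with its orientation from the accelerometers and gyroscopes into a single composite transformation matrix applied to a 3D object — can be sketched as follows. This is a minimal illustration assuming a rotation matrix has already been derived from the IMU data; all names are hypothetical:

```python
import math

def compose_transform(rotation, position):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    (controller orientation, from the IMU) and a 3-vector (absolute
    controller position, from the 3D depth camera)."""
    m = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        for j in range(3):
            m[i][j] = rotation[i][j]
        m[i][3] = position[i]
    m[3][3] = 1.0
    return m

def apply_transform(m, point):
    """Apply the composite transform to one vertex of the virtual object."""
    x, y, z = point
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

# Example: a 90-degree rotation about Z (orientation from the IMU) combined
# with the controller position reported by the depth camera.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
rot_z = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
m = compose_transform(rot_z, [1.0, 2.0, 3.0])
transformed = apply_transform(m, (1.0, 0.0, 0.0))
```

Because the translation comes from absolute camera tracking rather than integrated accelerometer data, this construction gives the one-to-one correspondence between real-world and virtual positions that the paper emphasizes.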


distributed simulation and real-time applications | 2008

A SIP Based P2P Architecture for Social Networking Multimedia

Rabih Dagher; Cristian Gadea; Bogdan Ionescu; Dan Ionescu; Robin Tropper

P2P applications have been seen as among the most elegant and simple Web applications, though they have stirred a lot of controversy as a vehicle for many Web activities which were, and are, not legal. P2P technology has been applied to many domains, from music downloading to communication system management. The grid, as a middleware, has raised P2P technology to the level of distributed computing, and a series of P2P tools have been devised for the design, development and deployment of network-based computing. Despite their spread, there are many open issues, such as the lack of any centralized control or hierarchical organization. In this paper, a new hybrid architecture is introduced in which a P2P solution based on SIP provides server-to-server connectivity on demand. This serves as the central axis of a platform for media collaboration on the cloud. It is shown that, using the new collaborative workspace-on-the-cloud architecture, different social networks can be linked in a peer-to-peer manner. Security issues are beyond the scope of this paper. A proof of concept, a use case scenario, and results obtained from its usage in a test environment are given at the end of the paper.


symposium on applied computational intelligence and informatics | 2012

Finger-based gesture control of a collaborative online workspace

Cristian Gadea; Bogdan Ionescu; Dan Ionescu; Shahidul Islam; Bogdan Solomon

A gesture-based human computer interface can make computers and devices easier to use, such as by allowing people to share photos by moving their hands through the air. Existing solutions have relied on exotic hardware, often involving elaborate setups limited to the research lab. Gesture recognition algorithms used so far are not practical or responsive enough for real-world use, partially due to the inadequate data on which the image processing is applied. Most importantly, existing solutions have lacked a workspace that allows users to perform common collaborative tasks by using their hands and fingers. In this paper, a new paradigm for next-generation computer interfaces is introduced. The method presented is based on a custom 3D camera that is easy to set up and has a flexible detection range. This method accurately detects hand gestures from depth data, allowing them to be used to control any application or device. The paper proposes the control of application windows and their content in collaborative online workspaces on which many teams cooperate to complete useful tasks, as shown with examples.


symposium on 3d user interfaces | 2016

Full-body tracking using a sensor array system and laser-based sweeps

Shahidul Islam; Bogdan Ionescu; Cristian Gadea; Dan Ionescu

The increased availability of consumer-grade virtual reality (VR) head-mounted displays (HMDs) has created significant demand for affordable and reliable 3D input devices that can be used to control 3D user interfaces. Accurate positioning of a user's body within the virtual environment is essential in order to provide users with convincing and interactive VR experiences. Existing full-body motion tracking systems from academia and industry have suffered from problems of occlusion and accumulated sensor error while often lacking absolute positional tracking. This paper describes a wireless Sensor Array System that uses multiple inertial measurement units (IMUs) for calculating the complete pose of a user's body. The system corrects gyroscope errors by using magnetic sensor data. The Sensor Array System is augmented by a positional tracking system that consists of a rotary-laser base station and a photodiode-based tracked object worn on the user's torso. The base station emits horizontal and vertical laser lines that sweep across the environment in sequence. With the known configuration of the photodiode constellation, the position and orientation of the tracked object can be determined with high accuracy, low latency, and low computational overhead. As will be shown, the sensor fusion algorithms used result in a full-body tracking system that can be applied to a wide variety of 3D applications and interfaces.
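The drift-correction idea mentioned above — integrating gyroscope rates for short-term accuracy while using an absolute reference (here the magnetometer heading) to cancel accumulated error — is commonly realized with a complementary filter. A minimal single-axis sketch, not the paper's actual algorithm, with hypothetical parameter values:

```python
def complementary_filter(yaw_prev, gyro_rate, mag_yaw, dt, alpha=0.98):
    """One yaw (heading) update: integrate the gyroscope rate, then
    blend in the magnetometer heading to bound long-term drift."""
    gyro_yaw = yaw_prev + gyro_rate * dt  # short-term: gyro integration
    return alpha * gyro_yaw + (1.0 - alpha) * mag_yaw  # long-term: magnetometer

# A gyroscope with a constant bias of 0.01 rad/s would drift 0.1 rad over
# 10 s if integrated alone; the magnetometer (true heading 0.0 here)
# keeps the estimate bounded near zero instead.
yaw = 0.0
for _ in range(1000):  # 10 s of updates at 100 Hz
    yaw = complementary_filter(yaw, gyro_rate=0.01, mag_yaw=0.0, dt=0.01)
```

The same blend-against-absolute-reference structure applies to the laser-sweep position data, which plays the role of the drift-free reference for the torso-worn tracked object.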


IEEE Latin America Transactions | 2014

Using a NIR Camera for Car Gesture Control

Bogdan Ionescu; Viorel Suse; Cristian Gadea; Bogdan Solomon; Daniela Ionescu; Shariful Islam; Marius Cordea

As digital components are increasingly present in the control of automotive engines, steering systems and other in-car devices, Human-Vehicle Interaction (HVI) becomes more and more complex, requiring new user interfaces. Gesture control is proposed in the literature as a technique that deserves to be explored, as it can tremendously simplify numerous interactions between the car and the driver and/or other passengers. Key characteristics of such HVI devices include reliability, robustness, and stability of the entire system, ranging from the acquisition of the gesture to its recognition and tracking in real-time. In this paper, a smart, real-time depth camera operating in the Near Infrared (NIR) spectrum is introduced. The camera is based on a new depth-generation principle of sampling the space of the Field-of-View (FOV) with IR pulses of variable frequency and duty cycle. The depth images are calculated using a reconfigurable hardware architecture and a series of eight IR images obtained via a sensitive image sensor. The final depth map is then processed by the gesture detection, recognition and tracking algorithms. A series of gestures are explored to qualify them for the special case of car control.


collaboration technologies and systems | 2012

Distributed clouds for collaborative applications

Bogdan Solomon; Dan Ionescu; Cristian Gadea; Stejarel Veres; Marin Litoiu; Joanna Ng

With the advent of social networking and the appearance of Web 2.0, collaborative applications which allow users to share data online, often in real-time, have gained increasing prominence. Whether for sharing images, sharing videos, or even sharing live gaming sessions, such applications must deal with session sizes from tens to tens of thousands of people. However, existing products have not been able to provide a scalable cloud-based system that synchronizes disparate web content among many users. Such a goal is desired in order to provide the benefits of cloud deployments to collaborative applications. Many such applications cannot predict the number of connections which they may need to handle. As such, applications must either provision a higher number of servers in anticipation of more traffic, or be faced with a degradation of the user experience when a large number of clients connect to the application. Cloud-based deployments can alleviate these issues by allowing the application's server base to scale automatically with user demand. A cloud deployment can also distribute servers throughout different geographic locations in order to offer improved latency and response times to its clients. This paper will present an architecture for a distributed, collaborative, and server-based application. The application is deployed inside a distributed cloud environment, which consists of multiple clouds in various geographic locations.
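The scale-with-demand behavior described above reduces, in its simplest form, to a sizing rule: provision just enough servers to keep each below its connection capacity. A minimal sketch under assumed capacity numbers (the function and its parameters are illustrative, not from the paper):

```python
import math

def target_server_count(connections, capacity_per_server, min_servers=1):
    """Elastic sizing rule: the smallest fleet that keeps every server
    at or below its connection capacity, never dropping below a floor
    that preserves availability when demand is near zero."""
    needed = math.ceil(connections / capacity_per_server)
    return max(min_servers, needed)

# 2500 concurrent collaboration sessions at an assumed 1000 per server
# would call for 3 servers; an idle deployment keeps 1 running.
fleet_busy = target_server_count(2500, 1000)
fleet_idle = target_server_count(0, 1000)
```

A real distributed-cloud controller would additionally decide *where* to place those servers geographically, per the latency argument in the abstract, but the capacity calculation is the core of the autoscaling loop.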


international joint conference on computational cybernetics and technical informatics | 2010

A new method for 3D object reconstruction in real-time

Dan Ionescu; Bogdan Ionescu; Shahidul Islam; Cristian Gadea

A novel real-time depth-mapping principle and camera, where pulsed laser light is combined with a gain-modulated camera and a phase-locked loop control of laser intensity, is described in this paper. The depth resolution is variable, depending on the resolution of the camera and the gating possibilities of the sensor. A 1 Mpixel sensor is used, providing a resolution of 1024×1024, which can be gated at very high speeds down to a few ns. Front images of real objects are reconstructed in 3D views, in real-time, based on the data provided by the laser imaging technique and on a new image processing algorithm. The new method based on pulsed laser diodes is applicable to various types of image sensors as required by the application domain. As such, the camera can be used for gaming and for controlling various computer applications through gestures, spanning from digital signage to, for example, unmanned vehicles. Results are provided for a low-end camera used in gaming. A new human-computer interface based on gesture control is described. A series of experiments is given in which the camera is used to capture human gestures, which are then interpreted and recognized by various image processing algorithms.


virtual environments, human-computer interfaces and measurement systems | 2010

Using depth measuring cameras for a new human computer interaction in augmented virtual reality environments

Dan Ionescu; Bogdan Ionescu; Shahidul Islam; Cristian Gadea; Eric McQuiggan

The usage of a novel real-time depth-mapping principle, and of a 3D camera which embodies the new depth-mapping principle, to control a number of computer applications ranging from games to collaborative multimedia environments is described in this paper. The 3D camera has a variable depth resolution obtained from images of 1024×1024 pixels. By using the depth data provided by the 3D camera, a person's body parts and their movements are analyzed and reconstructed in real-time. Their features and spatial positions are determined and corresponding actions are triggered. Triggered actions are used to control computer games, digital signage, GIS applications, unmanned vehicles, and consumer electronics such as TVs, set-top boxes and PDAs. In this paper, the use of a 3D camera in a new human-computer interface for augmented virtual reality is given and illustrated in a series of images captured from live experiments.

Collaboration


Dive into Cristian Gadea's collaboration.
