Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lars Christian Jensen is active.

Publications


Featured research published by Lars Christian Jensen.


Reconfigurable Computing and FPGAs | 2008

A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

Anders Kjær-Nielsen; Lars Christian Jensen; Anders Stengaard Sørensen; Norbert Krüger

In this paper, a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used in a hybrid architecture [5], where it produces undistorted and rectified 512 x 512 images at 2 x 15 frames per second (fps), as either a downsampled version or a region of interest (ROI) of the high-resolution camera output. Three processes are performed: Bayer demosaicing, downsampling and region of interest extraction, and undistortion and rectification. The latency of the system when running at 2 x 15 fps is 30 ms.
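As a rough plausibility check on those figures, the sketch below computes the bandwidth of the preprocessed output stream against the IEEE 1394a bus limit. The 8 bits per pixel is an assumption made for illustration; the abstract does not state the pixel format.

```python
# Back-of-the-envelope check: does a 2 x 15 fps stream of 512 x 512 images
# fit within the 400 Mbit/s IEEE 1394a bus mentioned in the abstract?
BITS_PER_PIXEL = 8      # assumed (e.g. raw Bayer or grayscale); not stated in the paper
WIDTH = HEIGHT = 512    # output resolution from the abstract
FPS_PER_CAMERA = 15     # "2 x 15 frames per second"
CAMERAS = 2
BUS_LIMIT_MBIT_S = 400  # IEEE 1394a

stream_mbit_s = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS_PER_CAMERA * CAMERAS / 1e6
print(f"output stream: {stream_mbit_s:.1f} of {BUS_LIMIT_MBIT_S} Mbit/s")
# -> about 62.9 Mbit/s, leaving most of the bus for the high-resolution input
```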


International Conference on Social Robotics | 2014

To Beep or Not to Beep Is Not the Whole Question

Kerstin Fischer; Lars Christian Jensen; Leon Bodenhagen

In this paper, we address the social effects of different mechanisms by which a robot can signal to a person that it wants to pass. In the situation investigated, the robot attempts to pass by a busy, naive participant who is blocking its way. The robot is a relatively large service robot, the Care-o-bot. Since speech melody has been found to fulfill social functions in human interactions, we investigate whether there is a difference in the perceived politeness of the robot if it uses a beep sequence with rising versus falling intonation, in comparison with no acoustic signal at all. The results of the experimental study (n=49) show that approaching the person with a beep makes people more comfortable than approaching without any sound, and that rising intonation contours make people feel more at ease than falling contours, especially women, who rate the robot that uses rising intonation contours as friendlier and warmer. The exact form of robot output thus matters.
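For concreteness, a rising versus falling beep contour can be synthesized as a simple frequency glide. A minimal Python/NumPy sketch follows; the frequencies, duration, and sample rate are illustrative assumptions, not values from the study.

```python
import numpy as np

def beep(f_start_hz, f_end_hz, duration_s=0.4, sample_rate=44100):
    """Synthesize one beep whose pitch glides from f_start_hz to f_end_hz.

    The instantaneous frequency is swept linearly and integrated into a
    phase so the glide is smooth (a linear chirp).
    """
    n = int(sample_rate * duration_s)
    freq = np.linspace(f_start_hz, f_end_hz, n)        # instantaneous frequency
    phase = 2.0 * np.pi * np.cumsum(freq) / sample_rate
    return 0.5 * np.sin(phase)

rising = beep(440.0, 660.0)   # rising intonation contour
falling = beep(660.0, 440.0)  # falling intonation contour
```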


Designing Interactive Systems | 2014

Human actions made tangible: analysing the temporal organization of activities

Jacob Buur; Agnese Caglio; Lars Christian Jensen

With designers increasingly moving beyond button pushing and flat-screen interaction towards tangible and embodied interaction, techniques for user studies need to develop as well. While ethnographic video studies and ethnomethodological analyses are becoming standard in many interaction design projects, it remains a challenge to investigate in detail how people interact with their whole body. Analysis of full-body movement is time consuming, notation techniques are rare, and findings are difficult to share between members of a design team. In this paper we propose tangible video analysis, a method developed to engage people from different backgrounds in collaboratively analysing videos with the help of physical objects. We present one of these tools, Action Scrabble, for analysing the temporal organisation of human actions. We work with a case of skilled forklift truck driving. By backtracking our design research experiments, we unfold how and why the tangible tool succeeds in engaging designers with varied analysis experience to collaboratively focus on human action structures -- and even find video analysis fun!


International Conference on Social Robotics | 2015

The Effects of Social Gaze in Human-Robot Collaborative Assembly

Kerstin Fischer; Lars Christian Jensen; Franziska Kirstein; Sebastian Stabinger; Özgür Erkent; Dadhichi Shukla; Justus H. Piater

In this paper we explore how social gaze in an assembly robot affects how naive users interact with it. In a controlled experimental study, 30 participants instructed an industrial robot to fetch parts needed to assemble a wooden toolbox. Participants either interacted with a robot employing a simple gaze that follows the movements of its own arm, or with a robot that follows its own movements during tasks but also gazes at the participant between instructions. Our qualitative and quantitative analyses show that people in the social gaze condition are significantly quicker to engage the robot, smile significantly more often, and can better account for where the robot is looking. In addition, we find that people in the social gaze condition feel more responsible for the task performance. We conclude that social gaze in assembly scenarios fulfills floor management functions and provides an indicator of the robot's affordances, yet that it does not influence the likability, mutual interest and suspected competence of the robot.
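The two gaze conditions can be read as a small state machine: both follow the arm while a task is executing, and only the social condition switches to the participant's face between instructions. The sketch below is a hypothetical illustration of that logic; the pose arguments and names are placeholders, not the authors' interfaces.

```python
from enum import Enum, auto

class Phase(Enum):
    EXECUTING = auto()   # robot is carrying out a fetch instruction
    BETWEEN = auto()     # idle between instructions

def gaze_target(phase, social_gaze, end_effector_pose, participant_face_pose):
    """Pick the head gaze target for the current interaction phase.

    Both conditions track the robot's own arm during tasks; only the
    social condition adds mutual gaze between instructions.
    """
    if phase is Phase.EXECUTING:
        return end_effector_pose
    return participant_face_pose if social_gaze else end_effector_pose
```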


Human-Robot Interaction | 2016

A Comparison of Types of Robot Control for Programming by Demonstration

Kerstin Fischer; Franziska Kirstein; Lars Christian Jensen; Norbert Krüger; Kamil Kukliński; Maria Vanessa aus der Wieschen; Thiusius Rajeeth Savarimuthu

Programming by Demonstration (PbD) is an efficient way for non-experts to teach new skills to a robot. PbD can be carried out in different ways, for instance by kinesthetic guidance, teleoperation or external controls. In this paper, we compare these three ways of controlling a robot in terms of efficiency, effectiveness (success and error rate) and usability. In an industrial assembly scenario, 51 participants carried out peg-in-hole tasks using one of the three control modalities. The results show that kinesthetic guidance produces the best results. To test whether the problems during teleoperation arise because users cannot switch between control points with traditional teleoperation devices, as they can in kinesthetic guidance, we designed a new device that allows users to switch between controls for large and small movements. A user study with 15 participants shows that the novel teleoperation device yields results almost as good as kinesthetic guidance.
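One plausible reading of the two-granularity device is a mode switch that changes the gain mapping operator input to robot motion: a large gain for gross transport movements and a much smaller one for fine peg-in-hole alignment. The sketch below illustrates that idea under those assumptions; the gains and names are not from the paper.

```python
def scale_command(delta_xyz, fine_mode, coarse_gain=1.0, fine_gain=0.1):
    """Map an operator input increment to a robot motion increment.

    A single mode switch toggles between a coarse gain for large transport
    motions and a small gain for precise insertion moves. Gains are
    illustrative placeholders.
    """
    gain = fine_gain if fine_mode else coarse_gain
    return tuple(gain * d for d in delta_xyz)

# The same joystick increment moves the tool ten times less in fine mode.
print(scale_command((0.05, 0.0, 0.02), fine_mode=False))  # (0.05, 0.0, 0.02)
print(scale_command((0.05, 0.0, 0.02), fine_mode=True))   # ~(0.005, 0.0, 0.002)
```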


Robot and Human Interactive Communication | 2017

Timing of multimodal robot behaviors during human-robot collaboration

Lars Christian Jensen; Kerstin Fischer; Stefan-Daniel Suvei; Leon Bodenhagen

In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects the sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, a large service robot, in a medical measurement scenario, we compare the timing of the robot's behaviors in three between-subject conditions. The results show that the relative timing of robot behaviors has significant effects on the number of problems participants encounter, and that the robot's verbal output plays a special role because participants carry their expectations from human verbal interaction into their interactions with robots.
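The sequential versus simultaneous conditions amount to two different schedules for the same behaviors. A minimal asyncio sketch of that contrast follows; the behavior functions, texts, and durations are hypothetical stand-ins, not the robot's actual API.

```python
import asyncio

async def speak(text):
    """Stand-in for the robot's speech output."""
    print(f"robot says: {text}")
    await asyncio.sleep(1.0)   # placeholder utterance duration

async def move_arm(target):
    """Stand-in for an arm motion command."""
    print(f"arm moving to: {target}")
    await asyncio.sleep(2.0)   # placeholder motion duration

async def sequential():
    """Announce first, then move: behaviors strictly ordered."""
    await speak("I will now take the measurement.")
    await move_arm("measurement pose")

async def simultaneous():
    """Speech and movement start at the same time."""
    await asyncio.gather(
        speak("I will now take the measurement."),
        move_arm("measurement pose"),
    )

asyncio.run(sequential())      # swap in simultaneous() for the other condition
```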


Robot and Human Interactive Communication | 2016

Between legibility and contact: The role of gaze in robot approach

Kerstin Fischer; Lars Christian Jensen; Stefan-Daniel Suvei; Leon Bodenhagen

In this paper, we explore experimentally the possible tradeoff between gaze at the user and gaze at the path during robot approach. While some previous work indicates that gaze towards the user increases perceived safety because the user feels recognized, other work indicates that it is the legibility of the robot's actions that puts users at ease. If the robot does not drive straight up to the person, it can either continuously look at the person and thus maintain eye contact, or indicate its path through its gaze behavior, increasing legibility. In an experiment with N=36 participants, we tested the tradeoff between legibility and eye contact. The behavioral results show that users are significantly more at ease with the robot that gazes at them than with the robot that looks where it is going, measured by the number of glances away from the robot. Likewise, the participants rate the robot that looks at them continuously as more intelligent and more cooperative. Thus, participants value mutual gaze more highly than legibility.
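In control terms, the two conditions differ only in the head's gaze target while the base follows its curved approach path. A hypothetical one-function sketch of that choice is shown below; the condition names and pose arguments are illustrative, not the authors' implementation.

```python
def approach_gaze_target(condition, next_waypoint, user_face):
    """Choose the head gaze target while the base drives a curved approach.

    "contact" -> keep mutual gaze on the user throughout the approach
    "legible" -> look along the planned path to signal where the robot goes
    """
    if condition == "contact":
        return user_face
    return next_waypoint   # "legible" condition
```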


Human-Robot Interaction | 2016

Maintaining Trust While Fixated to a Rehabilitative Robot

Laura U. Jensen; Trine Straarup Winther; Rasmus Nyholm Jørgensen; Didde Marie Hellestrup; Lars Christian Jensen

This paper investigates the trust relationship between humans and a rehabilitation robot, the RoboTrainer. We present a study in which participants let the robot guide their arms through a series of preset coordinates in 3D space. Each participant interacted with the robot twice: once holding on to the robotic arm, and a second time fixated to it. Our findings show that, in general, participants did not feel more insecure when fixated to the robot. However, when the robot arm moves close to participants and enters their intimate space, or when it moves out into an outer position, participants display significantly more signs of fear than when the arm is in a normal position.


Human-Robot Interaction | 2016

Eliciting Conversation in Robot Vehicle Interactions

David Sirkin; Kerstin Fischer; Lars Christian Jensen; Wendy Ju


Human-Robot Interaction | 2015

Negotiating Instruction Strategies during Robot Action Demonstration

Lars Christian Jensen; Kerstin Fischer; Dadhichi Shukla; Justus H. Piater

Collaboration


Dive into Lars Christian Jensen's collaborations.

Top Co-Authors

Kerstin Fischer
University of Southern Denmark

Franziska Kirstein
University of Southern Denmark

Norbert Krüger
University of Southern Denmark

Kamil Kukliński
Bialystok University of Technology