Publications


Featured research published by Tevfik Metin Sezgin.


Intelligent User Interfaces | 2005

HMM-based efficient sketch recognition

Tevfik Metin Sezgin; Randall Davis

Current sketch recognition systems treat sketches as images or a collection of strokes, rather than viewing sketching as an interactive and incremental process. We show how viewing sketching as an interactive process allows us to recognize sketches using Hidden Markov Models. We report results of a user study indicating that in certain domains people draw objects using consistent stroke orderings. We show how this consistency, when present, can be used to perform sketch recognition efficiently. This novel approach yields polynomial-time algorithms for sketch recognition and segmentation, unlike conventional methods with exponential complexity.
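As a rough illustration of the approach (not the authors' implementation), the sketch below fits one discrete-emission HMM per object class over encoded stroke sequences and labels new input by forward score. The hmmlearn library and the integer primitive encoding are assumptions.

    import numpy as np
    from hmmlearn import hmm

    def train_object_model(sequences, n_states=3):
        # One discrete-emission HMM per object class, trained on observed
        # stroke-primitive sequences (e.g., line=0, arc=1, ...).
        X = np.concatenate(sequences).reshape(-1, 1)
        lengths = [len(s) for s in sequences]
        model = hmm.CategoricalHMM(n_components=n_states, n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(models, sequence):
        # Score an unlabeled stroke sequence against every class model;
        # models is a dict mapping class label -> trained HMM.
        seq = np.asarray(sequence).reshape(-1, 1)
        return max(models, key=lambda label: models[label].score(seq))

Forward scoring is linear in sequence length and quadratic in the number of states, which is where the polynomial-time claim above comes from.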


IEEE Computer Graphics and Applications | 2007

Sketch Interpretation Using Multiscale Models of Temporal Patterns

Tevfik Metin Sezgin; Randall Davis

Sketching is a natural input modality that has received increased interest in the computer graphics and human-computer interaction communities. The emergence of hardware such as tablet PCs and handheld PDAs provides easy means for capturing pen input. These devices combine a display, pen tracker, and computing device, making it possible to capture and process sketches online, as they are drawn. In this article, we present our sketch-recognition framework, which uses data to automatically learn the object orderings that commonly occur when people sketch and then uses those orderings for sketch recognition. The key features that make this framework novel include learning object-level patterns from data, handling objects comprising multiple strokes (multistroke objects) and objects that share strokes (multiobject strokes), and supporting continuous observable features. We also present an efficient graphical model implementation of our approach and report that a specialized inference algorithm, Lauritzen-Jensen stable conditional Gaussian belief propagation, should be used to avoid numerical instabilities in recognition.


IEEE Haptics Symposium | 2010

Haptic negotiation and role exchange for collaboration in virtual environments

S. Ozgur Oguz; Ayse Kucukyilmaz; Tevfik Metin Sezgin; Cagatay Basdogan

We investigate how collaborative guidance can be realized in multi-modal virtual environments for dynamic tasks involving motor control. Haptic guidance in our context can be defined as any form of force/tactile feedback that the computer generates to help a user execute a task in a faster, more accurate, and subjectively more pleasing fashion. In particular, we are interested in determining guidance mechanisms that best facilitate task performance and evoke a natural sense of collaboration. We suggest that a haptic guidance system can be further improved if it is supplemented with a role exchange mechanism, which allows the computer to adjust the forces it applies to the user in response to the user's actions. Recent work on collaboration and role exchange has presented new perspectives on defining roles and interaction. However, existing approaches mainly focus on relatively basic environments where the state of the system can be defined with a few parameters. We designed and implemented a complex and highly dynamic multimodal game for testing our interaction model. Since the state space of our application is complex, role exchange needs to be implemented carefully. We defined a novel negotiation process, which facilitates dynamic communication between the user and the computer, and realizes the exchange of roles using a three-state finite state machine. Our preliminary results indicate that even though the negotiation and role exchange mechanism we adopted does not improve performance on every evaluation criterion, it introduces a more personal and human-like interaction model.
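The three-state negotiation machine lends itself to a compact sketch. The state names and the force-threshold triggers below are illustrative assumptions; the paper's actual states and transition conditions may differ.

    from enum import Enum, auto

    class Role(Enum):
        HUMAN_LEADS = auto()
        NEGOTIATING = auto()
        COMPUTER_LEADS = auto()

    def step(state, user_force, threshold=2.0):
        # Advance the FSM from the user's interaction force (N); a sustained
        # push signals intent to take control, relaxing signals intent to cede.
        if state is Role.HUMAN_LEADS and user_force < threshold:
            return Role.NEGOTIATING      # user relaxes: open negotiation
        if state is Role.NEGOTIATING:
            return Role.HUMAN_LEADS if user_force >= threshold else Role.COMPUTER_LEADS
        if state is Role.COMPUTER_LEADS and user_force >= threshold:
            return Role.NEGOTIATING      # user pushes back: renegotiate
        return state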


Computers & Graphics | 2008

Sketch recognition in interspersed drawings using time-based graphical models

Tevfik Metin Sezgin; Randall Davis

Sketching is a natural mode of interaction used in a variety of settings. With the increasing availability of pen-based computers, sketch recognition has gained attention as an enabling technology for natural pen-based interfaces. Previous work in sketch recognition has shown that in certain domains the stroke orderings used when drawing objects contain temporal patterns that can aid recognition. So far, systems that use temporal information for recognition have assumed that objects are drawn one at a time. This paper shows how this assumption can be relaxed to permit temporal interspersing of strokes from different objects. We describe a statistical framework based on dynamic Bayesian networks that explicitly models the fact that objects can be drawn interspersed. We present recognition results for hand-drawn electronic circuit diagrams, showing that handling interspersed drawing provides a significant increase in accuracy.
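The central idea, that a hidden "currently drawn object" variable may switch mid-stream, can be illustrated with a toy forward pass in plain numpy. This is a stand-in for the paper's dynamic Bayesian network; the emission tables and the switch probability are assumptions.

    import numpy as np

    def forward(obs, emit, switch_prob=0.1):
        # obs: stroke-primitive indices; emit[k]: object k's emission
        # distribution over primitives. Returns per-stroke posteriors over
        # which object is currently being drawn.
        K = len(emit)
        trans = np.full((K, K), switch_prob / (K - 1))
        np.fill_diagonal(trans, 1.0 - switch_prob)  # usually stay on one object
        alpha = np.full(K, 1.0 / K) * np.array([emit[k][obs[0]] for k in range(K)])
        alpha /= alpha.sum()
        posteriors = [alpha]
        for o in obs[1:]:
            alpha = (alpha @ trans) * np.array([emit[k][o] for k in range(K)])
            alpha /= alpha.sum()
            posteriors.append(alpha)
        return np.array(posteriors)

Because the transition matrix allows off-diagonal moves, strokes interleaved between two objects simply show up as the posterior mass shifting back and forth.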


Pattern Recognition | 2011

Sketch recognition by fusion of temporal and image-based features

Relja Arandjelović; Tevfik Metin Sezgin

The increasing availability of pen-based hardware has recently resulted in a parallel growth in sketch-based user interfaces. Sketch-based user interfaces aim to combine the expressive power of free-hand sketching with the processing power of computers. Most sketch-based systems require intelligent ink processing capabilities, which makes the development of robust sketch recognition algorithms a primary concern in the field. So far, the research in sketch recognition has produced various independent approaches to recognition, each of which uses a particular kind of information (e.g., geometric and spatial constraints, image-based features, temporal stroke-ordering patterns). These methods were designed in isolation as stand-alone algorithms, and there has been little work treating various recognition methods as alternative sources of information that can be combined to increase sketch recognition accuracy. In this paper, we focus on two such methods and fuse an image-based method with a time-based method in an attempt to combine the knowledge of how objects look (image data) with the knowledge of how they are drawn (temporal data). In the course of combining spatial and temporal information, we also introduce a mathematically well-founded fusion method for combining recognizers. Our combination method can be used for isolated sketch recognition as well as full diagram recognition. Our evaluation with two databases shows that fusing image-based and temporal features yields higher recognition rates. These results are the first to confirm the complementary nature of image-based and temporal recognition methods for full sketch recognition, which has long been suggested but never supported by data.
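A minimal probability-level fusion sketch, assuming both recognizers output per-class posteriors; the weighted product rule shown here is one common choice and not necessarily the paper's exact scheme.

    import numpy as np

    def fuse(p_image, p_temporal, w=0.5):
        # p_image, p_temporal: per-class posteriors from the two recognizers;
        # w weights the image-based evidence relative to the temporal one.
        combined = (p_image ** w) * (p_temporal ** (1.0 - w))
        return combined / combined.sum()  # renormalize to a distribution

    # Example: the recognizers disagree; fusion favors jointly plausible classes.
    p_img = np.array([0.6, 0.3, 0.1])   # "looks like" class 0
    p_tmp = np.array([0.2, 0.7, 0.1])   # "drawn like" class 1
    print(fuse(p_img, p_tmp))

A product rule rewards classes that both recognizers consider plausible, which matches the intuition that appearance and drawing order are complementary evidence.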


IEEE Transactions on Haptics | 2013

Intention Recognition for Dynamic Role Exchange in Haptic Collaboration

Ayse Kucukyilmaz; Tevfik Metin Sezgin; Cagatay Basdogan

In human-computer collaboration involving haptics, a key issue that remains to be solved is to establish an intuitive communication between the partners. Even though computers are widely used to aid human operators in teleoperation, guidance, and training, because they lack the adaptability, versatility, and awareness of a human, their ability to improve efficiency and effectiveness in dynamic tasks is limited. We suggest that the communication between a human and a computer can be improved if it involves a decision-making process in which the computer is programmed to infer the intentions of the human operator and dynamically adjust the control levels of the interacting parties to facilitate a more intuitive interaction setup. In this paper, we investigate the utility of such a dynamic role exchange mechanism, where partners negotiate through the haptic channel to trade their control levels on a collaborative task. We examine the energy consumption, the work done on the manipulated object, and the joint efficiency in addition to the task performance. We show that when compared to an equal control condition, a role exchange mechanism improves task performance and the joint efficiency of the partners. We also show that augmenting the system with additional informative visual and vibrotactile cues, which are used to display the state of interaction, allows the users to become aware of the underlying role exchange mechanism and utilize it in favor of the task. These cues also improve the users' sense of interaction and reinforce their belief that the computer aids with the execution of the task.


IEEE Transactions on Haptics | 2015

Recognition of Haptic Interaction Patterns in Dyadic Joint Object Manipulation

Cigil Ece Madan; Ayse Kucukyilmaz; Tevfik Metin Sezgin; Cagatay Basdogan

The development of robots that can physically cooperate with humans has attracted interest in recent decades. Obviously, this effort requires a deep understanding of the intrinsic properties of interaction. Up to now, many researchers have focused on inferring human intents in terms of intermediate or terminal goals in physical tasks. On the other hand, working side by side with people, an autonomous robot additionally needs to come up with in-depth information about underlying haptic interaction patterns that are typically encountered during human-human cooperation. However, to our knowledge, no study has yet focused on characterizing such detailed information. In this sense, this work is pioneering as an effort to gain deeper understanding of interaction patterns involving two or more humans in a physical task. We present a labeled human-human-interaction dataset, which captures the interaction of two humans who collaboratively transport an object in a haptics-enabled virtual environment. In the light of information gained by studying this dataset, we propose that the actions of cooperating partners can be examined under three interaction types: in any cooperative task, the interacting humans either 1) work in harmony, 2) cope with conflicts, or 3) remain passive during interaction. In line with this conception, we present a taxonomy of human interaction patterns and then propose five different feature sets, comprising force-, velocity-, and power-related information, for the classification of these patterns. Our evaluation shows that using a multi-class support vector machine (SVM) classifier, we can accomplish a correct classification rate of 86 percent for the identification of interaction patterns, an accuracy obtained by fusing the most informative features selected with the Minimum Redundancy Maximum Relevance (mRMR) method.
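A hedged sketch of such a pipeline in scikit-learn, with a mutual-information filter standing in for mRMR (which scikit-learn does not ship); the feature count and kernel choice are assumptions.

    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def build_classifier(n_features=20):
        # X rows: force/velocity/power features per interaction window;
        # y: interaction type (harmony / conflict / passive).
        return make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=n_features),
            SVC(kernel="rbf", decision_function_shape="ovr"),
        )

    # clf = build_classifier().fit(X_train, y_train)
    # pattern = clf.predict(X_test)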


World Haptics Conference | 2013

Haptic stylus with inertial and vibro-tactile feedback

Atakan Arasan; Cagatay Basdogan; Tevfik Metin Sezgin

In this paper, we introduce a novel stylus capable of displaying two haptic effects to the user. The first effect is a tactile flow effect up and down along the pen, and the other is a rotation effect about the long axis of the pen. The flow effect is based on the haptic illusion of "apparent tactile motion", while the rotation effect comes from the reaction torque created by an electric motor placed along the stylus shaft. The stylus is embedded with two vibration actuators at the ends, and a DC motor with a rotating balanced mass in the middle. We show that it is possible to create flow and rotation effects by driving the actuators on the stylus. Furthermore, we show that the timing and the actuation patterns of the vibration actuators and DC motor significantly affect the discernibility of the synthesized perceptions; hence these parameters should be selected carefully. Two psychophysical experiments, each performed with 10 subjects, shed light on the discernibility of the two haptic effects as a function of various actuation parameters. Our results show that, with carefully selected parameters, the subjects can successfully identify the flow of motion and the direction of rotation with high accuracy.
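The "apparent tactile motion" effect hinges on overlapping onsets of the two actuators. The toy waveform generator below shows the timing idea; the burst duration, onset asynchrony, and 250 Hz carrier are assumed values, not the paper's parameters.

    import numpy as np

    def burst(t, onset, duration, freq=250.0):
        # Sine burst with a rectangular envelope (amplitude 0..1).
        env = ((t >= onset) & (t < onset + duration)).astype(float)
        return env * np.sin(2 * np.pi * freq * t)

    t = np.linspace(0.0, 0.3, 3000)            # 300 ms timeline, 10 kHz samples
    tip = burst(t, onset=0.00, duration=0.12)  # actuator near the pen tip
    top = burst(t, onset=0.06, duration=0.12)  # actuator near the top, delayed
    # Driving (tip, top) in this order reads as upward flow; swap to reverse.

Because the second burst starts before the first ends, the two discrete stimuli fuse perceptually into a single stimulus moving along the shaft.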


International Journal of Human-Computer Studies | 2015

Gaze-based prediction of pen-based virtual interaction tasks

Çağla Çığ; Tevfik Metin Sezgin

In typical human–computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by the fact that they rely heavily on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.
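A minimal sketch of the prediction step, assuming gaze activity around each pen-down is summarized as a fixed-length feature vector; the feature names and the logistic-regression choice are illustrative, not the paper's exact setup.

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Each row of X: gaze features over a window around pen-down, e.g.
    # [fixation_count, mean_saccade_amplitude, gaze_pen_distance, dwell_time]
    # (hypothetical features); y: intended command labels.
    def build_predictor():
        return make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # clf = build_predictor().fit(X_train, y_train)
    # command = clf.predict(gaze_window)   # e.g. "drag" vs "resize"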


International Conference on Multimodal Interfaces | 2009

Multimodal inference for driver-vehicle interaction

Tevfik Metin Sezgin; Ian Davies; Peter Robinson

In this paper we present a novel system for driver-vehicle interaction which combines speech recognition with facial-expression recognition to increase intention recognition accuracy in the presence of engine and road noise. Our system would allow drivers to interact with in-car devices such as satellite navigation and other telematic or control systems. We describe a pilot study and experiment in which we tested the system, and show that multimodal fusion of speech and facial expression recognition provides higher accuracy than either modality alone.
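One simple way to realize such fusion is noise-adaptive late fusion, down-weighting the speech posterior as cabin noise rises; the linear weighting below is an assumption for exposition, not the paper's method.

    import numpy as np

    def fuse_intent(p_speech, p_face, noise_level):
        # p_speech, p_face: per-intent posteriors from the two recognizers;
        # noise_level in [0, 1]; 1 = speech channel fully swamped by noise.
        w_speech = 1.0 - noise_level
        combined = w_speech * p_speech + (1.0 - w_speech) * p_face
        return combined / combined.sum()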

Collaboration


Dive into Tevfik Metin Sezgin's collaborations.

Top Co-Authors


Randall Davis

Massachusetts Institute of Technology
