
Publication


Featured research published by Vinayak.


User Interface Software and Technology | 2012

Extended multitouch: recovering touch posture and differentiating users using a depth camera

Sundar Murugappan; Vinayak; Niklas Elmqvist; Karthik Ramani

Multitouch surfaces are becoming prevalent, but most existing technologies are only capable of detecting the user's actual points of contact on the surface, not the identity, posture, and handedness of the user. In this paper, we define the concept of extended multitouch interaction as a richer input modality that includes all of this information. We further present a practical solution to achieve this on tabletop displays based on mounting a single commodity depth camera above a horizontal surface. This enables us not only to detect when the surface is being touched, but also to recover the user's exact finger and hand posture, as well as to distinguish between different users and their handedness. We validate our approach in two user studies, and deploy the technique in a scratchpad tool and in a pen + touch sketch tool.
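The core of depth-camera touch sensing of this kind is comparing each pixel's depth against a pre-captured background depth of the bare surface. A minimal sketch of that idea, with illustrative threshold values and function names that are assumptions, not the authors' implementation:

```python
# Hedged sketch: touch detection from an overhead depth camera.
# A pixel is a touch candidate when the hand surface sits in a thin
# band just above the pre-captured background depth of the tabletop.

def detect_touches(depth, background, near_mm=4.0, far_mm=20.0):
    """Return (row, col) pixels whose depth falls inside the touch band.

    depth, background: 2D lists of depth values in millimetres.
    near_mm / far_mm: illustrative thresholds; real systems tune these
    to the sensor's noise level.
    """
    touches = []
    for r, (drow, brow) in enumerate(zip(depth, background)):
        for c, (d, b) in enumerate(zip(drow, brow)):
            height = b - d  # how far above the table this pixel is
            if near_mm <= height <= far_mm:
                touches.append((r, c))
    return touches
```

Pixels far above the band (the hovering palm and forearm) are what a fuller pipeline would segment and fit to recover posture and handedness.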


Computer-Aided Design | 2015

A gesture-free geometric approach for mid-air expression of design intent in 3D virtual pottery

Vinayak; Karthik Ramani

The advent of depth cameras has enabled mid-air interactions for shape modeling with bare hands. Typically, these interactions employ a finite set of pre-defined hand gestures to allow users to specify modeling operations in virtual space. However, human interactions in real-world shaping processes (such as pottery or sculpting) are complex, iterative, and continuous. In this paper, we show that the expression of user intent in shaping processes can be derived from the geometry of contact between the hand and the manipulated object. Specifically, we describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot's profile to the shape of the user's hand, represented as a point-cloud (PCL). Thus, a user does not need to learn, know, or remember any gestures to interact with our system. Our choice of pottery simplifies the geometric representation, allowing us to systematically study how users use their hands and fingers to express the intent of deformation during a shaping process. Our evaluations demonstrate that it is possible to enable users to express their intent for shape deformation without the need for a fixed set of gestures for clutching and deforming a shape. Highlights: We introduce a geometric approach for mid-air virtual pottery design. A user can design virtual pots without the need to remember gestures. The shape of a pot gradually converges to the point-cloud of the user's hands. Applications are shown with two depth sensors, Leap Motion and SoftKinetic DepthSense. User evaluation demonstrates strengths and weaknesses of our approach.
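The "gradual and progressive convergence" of the pot's profile toward the hand can be sketched as a per-step relaxation: each profile sample moves a fraction of the way toward a nearby hand point, so the pot deforms only where the hand is close. All names and parameter values below are illustrative assumptions, not the paper's code:

```python
# Hedged sketch of proximal attraction for virtual pottery: the pot's
# radial profile converges gradually toward nearby hand points.

def attract_profile(profile, hand_points, reach=0.2, rate=0.5):
    """One deformation step.

    profile: list of (height, radius) samples of the pot's profile curve.
    hand_points: list of (height, radius) hand samples in cylindrical coords.
    reach: only hand points within this radial distance influence the pot.
    rate: fraction of the gap closed per step (gradual convergence).
    """
    new_profile = []
    for h, r in profile:
        # hand samples at roughly this height
        candidates = [hr for hh, hr in hand_points if abs(hh - h) < 0.05]
        if candidates:
            target = min(candidates, key=lambda hr: abs(hr - r))
            if abs(target - r) < reach:
                r += rate * (target - r)  # move part-way toward the hand
        new_profile.append((h, r))
    return new_profile
```

Because the update is incremental and distance-gated, holding the hand near the pot pulls or pushes the surface continuously, with no discrete gesture needed to start or stop deformation.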


Journal of Computing and Information Science in Engineering | 2013

Handy-Potter: Rapid Exploration of Rotationally Symmetric Shapes Through Natural Hand Motions

Vinayak; Sundar Murugappan; Cecil Piya; Karthik Ramani

We present a paradigm for natural and exploratory shape modeling by introducing novel 3D interactions for creating, modifying and manipulating 3D shapes using arms and hands. Though current design tools provide complex modeling functionalities, they remain non-intuitive for novice users. Significant training is required to use these tools since they segregate 3D shapes into hierarchical 2D inputs, binding the user to stringent procedural steps and making modifications cumbersome. On the other hand, CAD systems are typically involved only during the final phases of design. This leaves a void in the early design phase, wherein creative exploration is critical. We present a shape creation paradigm as an exploration of creative imagination and externalization of shapes, particularly in the early phases of design. We integrate the capability of humans to express 3D shapes via hand-arm motions with the traditional sweep surface representation to demonstrate rapid exploration of a rich variety of 3D shapes. We track the skeleton of users using the depth data provided by a low-cost depth-sensing camera (Kinect™). Our modeling tool is configurable to provide a variety of implicit constraints for shape symmetry and resolution based on the position, orientation and speed of the arms. An intuitive strategy for shape modifications is also proposed. Our main goal is to help the user communicate design intent to the computer with minimal effort. To this end, we conclusively demonstrate the creation of a wide variety of product concepts and show an average modeling time of only a few seconds while retaining the intuitiveness of the design process.
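A sweep surface of the kind described can be built by placing one cross-section per tracked hand sample along the motion path. The sketch below generates circular cross-sections (the rotationally symmetric case); function and parameter names are illustrative assumptions:

```python
import math

# Hedged sketch: a rotationally symmetric sweep surface from a sequence
# of tracked hand positions, built as a generalized cylinder. Each
# tracked sample contributes one circular cross-section ring.

def sweep_surface(path, radii, n_around=16):
    """path: list of (x, y, z) centre points, e.g. hand positions over time.
    radii: per-sample cross-section radius, e.g. from hand separation.
    Returns a list of rings, each a list of (x, y, z) vertices, sweeping
    circles in the XZ plane along the path.
    """
    rings = []
    for (cx, cy, cz), r in zip(path, radii):
        ring = []
        for k in range(n_around):
            a = 2.0 * math.pi * k / n_around
            ring.append((cx + r * math.cos(a), cy, cz + r * math.sin(a)))
        rings.append(ring)
    return rings
```

Joining consecutive rings with quads yields the mesh; varying the radius with arm separation and filtering the path by arm speed would give the implicit resolution and symmetry constraints the abstract mentions.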


Tangible and Embedded Interaction | 2016

MobiSweep: Exploring Spatial Design Ideation Using a Smartphone as a Hand-held Reference Plane

Vinayak; Devarajan Ramanujan; Cecil Piya; Karthik Ramani

In this paper, we explore quick 3D shape composition during early-phase spatial design ideation. Our approach is to re-purpose a smartphone as a hand-held reference plane for creating, modifying, and manipulating 3D sweep surfaces. We implemented MobiSweep, a prototype application to explore a new design space of constrained spatial interactions that combine direct orientation control with indirect position control via well-established multi-touch gestures. MobiSweep leverages kinesthetically aware interactions for the creation of a sweep surface without explicit position tracking. The design concepts generated by users, in conjunction with their feedback, demonstrate the potential of such interactions in enabling spatial ideation.
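The split the abstract describes, direct orientation control plus indirect position control, can be sketched as a reference-plane object that copies the phone's facing direction but only nudges its position from touch-drag deltas. The class, mappings, and gain below are assumptions for illustration, not MobiSweep's implementation:

```python
# Hedged sketch: a hand-held reference plane whose orientation is taken
# directly from the phone's pose, while its position is moved indirectly
# by scaled touch-drag deltas (no absolute position tracking).

class ReferencePlane:
    def __init__(self):
        self.normal = (0.0, 0.0, 1.0)   # plane orientation in world space
        self.origin = [0.0, 0.0, 0.0]   # plane position in world space

    def set_orientation(self, phone_normal):
        """Direct control: the plane adopts the phone's facing direction."""
        self.normal = phone_normal

    def drag(self, dx, dy, gain=0.01):
        """Indirect control: a touch drag translates the plane by a
        gain-scaled delta instead of tracked absolute position."""
        self.origin[0] += gain * dx
        self.origin[1] += gain * dy
```

The point of the split is that phone IMUs report orientation reliably but not absolute position, so position is delegated to well-established multi-touch gestures.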


ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2012

Handy-Potter: Rapid 3D Shape Exploration Through Natural Hand Motions

Vinayak; Sundar Murugappan; Cecil Piya; Karthik Ramani

We present the paradigm of natural and exploratory shape modeling by introducing novel 3D interactions for creating, modifying and manipulating 3D shapes using arms and hands. Though current design tools provide complex modeling functionalities, they remain non-intuitive and require significant training, since they segregate 3D shapes into hierarchical 2D inputs, thus binding the user to stringent procedural steps and making modifications cumbersome. In addition, designers typically turn to CAD systems only once they already know what to design, so the creative exploration in design is lost. We present a shape creation paradigm as an exploration of creative imagination and externalization of shapes, particularly in the early phases of design. We integrate the capability of humans to express 3D shapes via hand-arm motions with the traditional sweep surface representation to demonstrate rapid exploration of a rich variety of fairly complex 3D shapes. We track the skeleton of users using the depth data provided by a low-cost depth-sensing camera (Kinect™). Our modeling tool is configurable to provide a variety of implicit constraints for shape symmetry and resolution based on the position, orientation and speed of the arms. Intuitive strategies for coarse and fine shape modifications are also proposed. We conclusively demonstrate the creation of a wide variety of product concepts and show an average modeling time of only a few seconds while retaining the intuitiveness of communicating the design intent.


Tangible and Embedded Interaction | 2017

Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World

Ke Huo; Vinayak; Karthik Ramani

We present Window-Shaping, a tangible mixed-reality (MR) interaction metaphor for design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using the sketch-and-inflate scheme, our metaphor enables quick design of dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects without the need for 3D reconstruction or fiducial markers. Through a preliminary evaluation of our prototype application we demonstrate the expressiveness provided by our design workflow, the effectiveness of our interaction scheme, and the potential of our metaphor.
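Sketch-and-inflate schemes generally lift a 2D outline into a rounded 3D surface by raising each interior point in proportion to its distance from the outline. A minimal sketch of that inflation heuristic, which is a common approach and not necessarily the authors' exact scheme:

```python
# Hedged sketch of an inflate step: interior samples of a 2D sketched
# outline are lifted into 3D, with height growing toward the interior.

def inflate(points, outline, scale=1.0):
    """points: interior (x, y) samples; outline: (x, y) boundary samples.
    Returns (x, y, z) vertices; sqrt of the boundary distance gives a
    rounded, pillow-like profile rather than a sharp tent.
    """
    surface = []
    for x, y in points:
        d = min(((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 for ox, oy in outline)
        surface.append((x, y, scale * d ** 0.5))
    return surface
```

Anchoring the sketch plane on a physical object's face is what lets the inflated model borrow that object's dimensions.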


Computers & Graphics | 2016

Extracting hand grasp and motion for intent expression in mid-air shape deformation

Vinayak; Karthik Ramani

We describe the iterative design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot-profile to the shape of the user's hand, represented as a point-cloud (PCL). Our pottery-inspired application served as a platform for systematically revealing how users use their hands to express the intent of deformation during a pot-shaping process. Our approach involved three stages: (a) clutching by proximal-attraction, (b) shaping by proximal-attraction, and (c) shaping by grasp+motion. The design and implementation of each stage was informed by user evaluations of the previous stage. Our work demonstrates that it is possible to enable users to express their intent for shape deformation without the need for a fixed set of gestures for clutching and deforming a shape. We found that the expressive capability of hand articulation can be effectively harnessed for controllable shaping by organizing the deformation process into broad classes of intended operations such as pulling, pushing, and fairing. After minimal practice with the pottery application, users could figure out their own strategy for reaching, grasping, and deforming the pot. Users particularly enjoyed using day-to-day physical objects as tools for shaping pots. Graphical abstract: grasp detection from kernel density. Highlights: We present iterative algorithm development for mid-air virtual pottery design. We extract grasp and motion using the kernel density of the point-cloud of the hands. We use hand grasp and motion for classifying intent toward the pot's deformation. Users can shape pots with hands as well as with physical objects as tools. Evaluation shows an increase in controllability of intent expression in pottery.
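The grasp-from-kernel-density idea can be sketched as a 1D kernel density estimate over each hand point's distance to the pot surface: a sharp density peak at distance zero means many points are wrapped onto the surface, i.e. a grasp. The functions, bandwidth, and threshold below are illustrative assumptions:

```python
import math

# Hedged sketch: deciding grasp from the kernel density of hand points,
# loosely following the idea of extracting grasp from the point-cloud's
# kernel density (not the paper's implementation).

def gaussian_kde_1d(samples, x, bandwidth=0.02):
    """Gaussian kernel density estimate at x from 1D samples, e.g. each
    hand point's signed distance to the pot surface."""
    if not samples:
        return 0.0
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

def is_grasping(surface_distances, density_threshold=5.0):
    """The hand counts as grasping when the density of hand points right
    at the surface (distance 0) is high enough."""
    return gaussian_kde_1d(surface_distances, 0.0) > density_threshold
```

Tracking the motion of the grasping point cluster between frames would then supply the pull/push/fair classification the abstract describes.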


Human Factors in Computing Systems | 2014

zPots: a virtual pottery experience with spatial interactions using the leap motion device

Vinayak; Karthik Ramani; Kevin Sang Lee; Raja Jasti

We present zPots, an application for gesture-free hand-based design of virtual pottery enabled by the Leap Motion device. With zPots, a user can shape and color 3D pots by moving bare hands in the air with minimal or no training. Unlike the large-space hand-and-body movements required by depth cameras such as the Kinect, the Leap Motion device facilitates close-range 3D interactions collocated with the personal computer. We demonstrate our application as a synergistic combination of novel spatial interactions and tool metaphors that cater to engaging and realistic experiences while supporting creativity in 3D shape conceptualization and modeling.


Computer-Aided Design | 2012

A vision modeling framework for DHM using geometrically estimated FoV

Vinayak; Dibakar Sen

Digital human modeling (DHM) involves modeling the structure, form and functional capabilities of human users for ergonomics simulation. This paper presents the application of geometric procedures for investigating the characteristics of human visual capabilities, which are particularly important in the context mentioned above. The cone of unrestricted directions through the pupil on a tessellated head model is taken as the geometric interpretation of the clinical field-of-view (FoV), and the results obtained are experimentally validated. Estimating the pupil movement for a given gaze direction using Listing's law, FoVs are re-computed. Significant variation of the FoV is observed with variation in gaze direction. A novel cube-grid representation, which integrates the unit-cube representation of directions and the enhanced slice representation, is introduced for fast and exact point classification in point-visibility analysis for a given FoV. Computing the containment frequency of every grid-cell over a given set of FoVs enables determination of percentile-based FoV contours for estimating the visual performance of a given population. This is a new concept that makes visibility analysis more meaningful from an ergonomics point of view. The algorithms are fast enough to support interactive analysis of reasonably complex scenes on a typical desktop computer.
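At its simplest, classifying a scene point against a FoV modelled as a cone of directions reduces to an angle test between the gaze direction and the direction from the pupil to the point. The sketch below uses a circular cone with a fixed half-angle, a deliberate simplification of the paper's cone of unrestricted directions; names and the default angle are assumptions:

```python
import math

# Hedged sketch: point-in-FoV classification against a circular cone
# of directions anchored at the pupil.

def in_fov(pupil, gaze_dir, point, half_angle_deg=55.0):
    """True if `point` lies within the cone of half-angle
    `half_angle_deg` around `gaze_dir`, apex at `pupil`."""
    v = [p - e for p, e in zip(point, pupil)]
    nv = math.sqrt(sum(c * c for c in v))
    ng = math.sqrt(sum(c * c for c in gaze_dir))
    if nv == 0.0 or ng == 0.0:
        return False
    cos_angle = sum(a * b for a, b in zip(v, gaze_dir)) / (nv * ng)
    # inside the cone when the angle to the gaze axis is small enough
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

The paper's cube-grid representation accelerates exactly this kind of query over many points and many FoVs, which is what makes the percentile contours tractable.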


Symposium on Spatial User Interaction | 2016

Window-Shaping: 3D Design Ideation in Mixed Reality

Ke Huo; Vinayak; Karthik Ramani

We present Window-Shaping, a mobile, markerless, mixed-reality (MR) interface for creative design ideation that allows for the direct creation of 3D shapes on and around physical objects. Using the sketch-and-inflate scheme, we present a design workflow where users can create dimensionally consistent and visually coherent 3D models by borrowing visual and dimensional attributes from existing physical objects.

Collaboration

Vinayak's top co-authors:

Dibakar Sen (Indian Institute of Science)
Kevin Sang Lee (Carnegie Mellon University)
Maria C. Yang (Massachusetts Institute of Technology)