Jared Alan Frank
New York University
Publications
Featured research published by Jared Alan Frank.
IEEE Control Systems Magazine | 2014
Jared Alan Frank; Vikram Kapila
As mobile devices begin to dominate the market and as laboratory equipment is increasingly connected to the Internet, there is an opportunity for laboratory instructors to recognize and respond to these changes by developing mobile apps for interacting with the equipment. Most reported incorporations of mobile devices thus far have been in virtual laboratories that use software to simulate experiments. Although such laboratories have the benefit of allowing students to learn from mistakes without damaging remote equipment, student learning can be limited when students are denied access to real data from real equipment. The primary contribution of this article is to outline the development of several mobile apps for interacting with real physical test-beds in an automatic control laboratory, both in the laboratory and remotely. Results are provided from a preliminary investigation of the usability and user experience associated with the applications, conducted with the students who will ultimately use the applications once they are integrated into the curriculum. Implementation considerations such as application development, wireless communication, hardware interfaces, and control are discussed. A microcontroller-based interface to the laboratory hardware provides a low-cost solution that can monitor, command, and control experiments via mobile devices.
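To make the hardware interface concrete, the sketch below shows one way a mobile app might command such a microcontroller-interfaced test-bed over a plain TCP socket. The message format, host address, and port are assumptions for illustration; the article does not specify its protocol.

```python
import socket

# Hypothetical plain-text protocol: "SET <setpoint>\n" is sent to a TCP
# server on the microcontroller interface; the reply carries the latest
# sensor sample. Host and port are assumed values, not from the article.
TESTBED_HOST = "192.168.1.50"
TESTBED_PORT = 5555

def command_setpoint(setpoint_deg: float) -> str:
    """Send a position setpoint and return the test-bed's reply."""
    with socket.create_connection((TESTBED_HOST, TESTBED_PORT), timeout=2.0) as s:
        s.sendall(f"SET {setpoint_deg:.2f}\n".encode())
        return s.recv(1024).decode().strip()

# Example: command the motor arm to 45 degrees and print the echoed state.
# print(command_setpoint(45.0))
```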
Computers in Education | 2017
Jared Alan Frank; Vikram Kapila
Even as mobile devices have become increasingly powerful and popular among learners and instructors alike, research involving their comprehensive integration into educational laboratory activities remains largely unexplored. This paper discusses efforts to integrate vision-based measurement and control, augmented reality (AR), and multi-touch interaction on mobile devices in the development of Mixed-Reality Learning Environments (MRLE) that enhance interactions with laboratory test-beds for science and engineering education. A learner points her device at a laboratory test-bed fitted with visual markers while a mobile application supplies a live view of the experiment augmented with interactive media that aid in the visualization of concepts and promote learner engagement. As the learner manipulates the augmented media, her gestures are mapped to commands that alter the behavior of the test-bed on the fly. Running in the background of the mobile application are algorithms performing vision-based estimation and wireless control of the test-bed. In this way, the sensing, storage, computation, and communication (SSCC) capabilities of mobile devices are leveraged to remove the need for laboratory-grade equipment, improving the cost-effectiveness and portability of platforms to conduct hands-on laboratories. We hypothesize that students using the MRLE platform demonstrate improvement in their knowledge of dynamic systems and control concepts and have generally favorable experiences using the platform. To validate the hypotheses concerning the educational effectiveness and user experience of the MRLEs, an evaluation was conducted with two classes of undergraduate students using an illustrative platform incorporating a tablet computer and motor test-bed to teach concepts of dynamic systems and control. Results of the evaluation validate the hypotheses. The benefits and drawbacks of the MRLEs observed throughout the study are discussed with respect to the traditional hands-on, virtual, and remote laboratory formats.
Highlights:
- Mobile devices and test-beds can be integrated according to a novel lab education paradigm.
- Vision-based measurement and control, AR, and touchscreen enhance lab interactions.
- The proposed paradigm can offer the benefits of hands-on, virtual, and remote labs.
- An implementation is developed using an iPad and a motor test-bed to teach control.
- Evaluation with students validates the implementation's educational effectiveness.
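The gesture-to-command mapping described above can be illustrated with a small homography-based sketch: given the pixel locations of the visual markers and their known positions on the test-bed plane, a touchscreen tap is converted to a setpoint in test-bed coordinates. The marker layout and coordinate values are placeholders, not the paper's calibration.

```python
import numpy as np
import cv2

# Pixel corners of four visual markers as detected in the camera frame
# (placeholder values) and their known positions on the test-bed plane (cm).
px = np.float32([[102, 88], [531, 95], [540, 402], [96, 395]])
world = np.float32([[0, 0], [40, 0], [40, 30], [0, 30]])

H = cv2.getPerspectiveTransform(px, world)  # image plane -> test-bed plane

def tap_to_command(u: float, v: float) -> tuple:
    """Map a touchscreen tap (pixels) to a setpoint on the test-bed (cm)."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])

# A tap near the first marker maps close to the test-bed origin.
print(tap_to_command(105.0, 90.0))
```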
International Conference on Informatics in Control, Automation and Robotics | 2015
Jared Alan Frank; José Antonio De Gracia Gómez; Vikram Kapila
Although the onboard cameras of smart devices have been used in the monitoring and teleoperation of physical systems such as robots, their use in the vision-based feedback control of such systems remains to be fully explored. In this paper, we discuss an approach to control a ball and beam test-bed using visual feedback from a smart device with its camera pointed at the test-bed. The computation of a homography between the frames of a live video and a reference image allows the smart device to accurately estimate the state of the test-bed while facing the test-bed from any perspective. Augmented reality is incorporated in the development of an interactive user interface on the smart device that allows users to command the position of the ball on the beam by tapping their fingers at the desired location on the touchscreen. Experiments using a tablet are performed to characterize the noise of vision-based measurements and to illustrate the performance of the closed-loop control system.
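A minimal sketch of the homography estimation step, assuming ORB features and RANSAC in OpenCV (the paper does not state its exact feature pipeline): the live frame is registered to the reference image so that ball measurements remain consistent from any viewing perspective.

```python
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_reference_homography(ref_gray, live_gray):
    """Estimate the homography mapping live-frame pixels into the reference
    image, making state estimates invariant to the device's viewpoint."""
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(live_gray, None)
    matches = sorted(bf.match(d2, d1), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # warp ball detections with cv2.perspectiveTransform(pt, H)
```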
Volume 3: Engineering Systems; Heat Transfer and Thermal Engineering; Materials and Tribology; Mechatronics; Robotics | 2014
David Alberto Lopez; Jared Alan Frank; Vikram Kapila
As mobile robots experience increased commercialization, development of intuitive interfaces for human-robot interaction gains paramount importance to promote pervasive adoption of such robots in society. Although smart devices may be useful to operate robots, prior research has not fully investigated the appropriateness of various interaction elements (e.g., touch, gestures, sensors, etc.) to render an effective human-robot interface. This paper provides overviews of a mobile manipulator and a tablet-based application to operate the mobile manipulator. In particular, a mobile manipulator is designed to navigate an obstacle course and to pick and place objects around the course, all under the control of a human operator who uses a tablet-based application. The tablet application provides the user with live video captured and streamed by a camera onboard the robot and by an overhead camera. In addition, to remotely operate the mobile manipulator, the tablet application provides the user a menu of four interface element options: virtual buttons, virtual joysticks, touchscreen gestures, and device tilting. To evaluate the intuitiveness of the four interface elements for operating the mobile manipulator, a user study is conducted in which participants' performance is monitored as they operate the mobile manipulator using the designed interfaces. The analysis of the user study shows that the tablet-based application allows even non-experienced users to operate the mobile manipulator without the need for extensive training.
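For the device-tilting interface element, a plausible mapping from tablet attitude to robot velocity commands is sketched below; the dead band and scaling constants are illustrative assumptions, not values from the study.

```python
import math

MAX_SPEED = 0.5   # m/s, assumed speed cap for the mobile manipulator
DEAD_BAND = 5.0   # degrees of tilt ignored to reject hand tremor (assumed)

def tilt_to_velocity(pitch_deg: float, roll_deg: float):
    """Map device tilt (from the tablet IMU) to (forward, turning) commands."""
    def scale(angle: float) -> float:
        if abs(angle) < DEAD_BAND:
            return 0.0
        # Normalize the remaining tilt against a 45-degree full-scale tilt.
        sign = math.copysign(1.0, angle)
        return sign * min((abs(angle) - DEAD_BAND) / 45.0, 1.0)
    return MAX_SPEED * scale(pitch_deg), MAX_SPEED * scale(roll_deg)

# Example: a gentle forward pitch with no roll yields a slow straight drive.
print(tilt_to_velocity(15.0, 2.0))
```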
Advances in Computing and Communications | 2016
Anthony Brill; Jared Alan Frank; Vikram Kapila
Mobile technology is developing and impacting society at an accelerating pace. Since their release in 2007, over one billion smartphones have reshaped the daily lives of their users, and their embedded technologies have become increasingly powerful and miniaturized with each new model. Yet, the majority of the most popular uses of these devices do not take full advantage of their sensing, storage, computation, and communication (SSCC) capabilities. In this paper, we consider an experimental setup in which a smartphone is mounted to a ball and beam system using a 3D-printed mounting structure attached at each end of the beam. To perform feedback control of the ball and beam system, the smartphone's inertial and camera sensors are used to measure the angular orientation and velocity of the beam and the translational position of the ball on the beam. To account for the nonlinear effects added to the system by the presence of the smartphone and its mounting structure, a feedback linearizing controller is used to stabilize the system. Simulation and experimental results are presented to show that smartphones and their various sensors can be integrated in the wireless sensing and control of physical systems as part of an emerging class of smartphone-mounted test-beds for research and education.
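The feedback linearization idea can be sketched for the textbook rolling-ball model r'' = (5/7)(r*theta_dot^2 - g*sin(theta)): solving for the beam angle that produces a desired ball acceleration cancels the nonlinearity. The model and gains below are an illustration under that standard model, not the paper's identified dynamics.

```python
import numpy as np

G = 9.81  # gravity (m/s^2); the 5/7 factor assumes a solid rolling ball

def feedback_linearizing_theta(r, r_dot, theta_dot, r_ref, k1=8.0, k2=4.0):
    """Outer-loop feedback linearization for the ball subsystem
    r_ddot = (5/7) * (r * theta_dot**2 - G * sin(theta)).
    Gains k1, k2 are illustrative, not the paper's values."""
    v = -k1 * (r - r_ref) - k2 * r_dot            # desired ball acceleration
    s = (r * theta_dot**2 - (7.0 / 5.0) * v) / G  # solve for sin(theta)
    return float(np.arcsin(np.clip(s, -1.0, 1.0)))  # commanded beam angle

# Example: ball at 0.1 m, at rest, commanded to the beam center.
print(feedback_linearizing_theta(0.1, 0.0, 0.0, 0.0))
```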
Intelligent User Interfaces | 2015
Jared Alan Frank; Vikram Kapila
Enabling natural and intuitive communication with robots calls for the design of intelligent user interfaces. As robots are introduced into applications with novice users, the information obtained from such users may not always be reliable. This paper describes a user interface approach to process and correct intended paths for robot navigation as sketched by users on a touchscreen. Our approach demonstrates that by processing video frames from an overhead camera and by using composite Bézier curves to interpolate smooth paths from a small set of significant points, low-resolution occupancy grid maps (OGMs) with numeric potential fields can be continuously updated to correct unsafe user-drawn paths at interactive speeds. The approach generates sufficiently complex paths that appear to bend around static and dynamic obstacles. The results of an evaluation study show that our approach captures the user intent while relieving the user from being concerned about her path-drawing abilities.
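The path-smoothing step can be sketched as follows: a C1-continuous composite Bézier curve is interpolated through the small set of significant points using Catmull-Rom-style tangents, one common construction for this purpose (the paper's exact control-point rule may differ).

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=20):
    """Sample one cubic Bezier segment at n parameter values."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def smooth_path(points, n=20):
    """Interpolate a C1-continuous composite Bezier through the significant
    points, deriving inner control points from central-difference tangents."""
    p = np.asarray(points, dtype=float)
    tan = np.gradient(p, axis=0)  # tangents, one-sided at the endpoints
    segs = [cubic_bezier(p[i], p[i] + tan[i] / 3,
                         p[i + 1] - tan[i + 1] / 3, p[i + 1], n)
            for i in range(len(p) - 1)]
    return np.vstack(segs)

# Example: smooth a sketched path through four significant points.
print(smooth_path([(0, 0), (2, 1), (4, 0), (6, 2)]).shape)  # (60, 2)
```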
Indian Control Conference | 2016
Jared Alan Frank; Anthony Brill; Vikram Kapila
The use of augmented reality (AR) and mobile applications has recently been investigated in the teaching of advanced concepts and training of skills in a variety of fields. By developing educational mobile applications that incorporate augmented reality, unique interactive learning experiences can be provided to learners on their personal smartphones and tablet computers. This paper presents the development of an immersive user interface on a tablet device that can be used by engineering students to interact with a motor test-bed as they examine the effects of discrete-time pole locations on the closed-loop dynamic response of the test-bed. Specifically, users point the rear-facing camera of the tablet at the test-bed, on which colored markers are affixed to enable an image processing routine running on the tablet to measure the angular position of an arm attached to the motor. To perform vision-based control of the angular position of the motor arm, a discrete-time Kalman filter and a full-state feedback controller are implemented in the background of the application. As the user taps on the touchscreen of the device, s/he adjusts the angular position of a 3D semi-transparent virtual arm that represents the set point to the system. An interactive pole-zero plot allows users to tap at any desired location for the closed-loop pole placement, in turn triggering the application to design a new controller for driving the test-bed. Real-time plots enable the user to explore the resulting closed-loop response of the test-bed. Experimental results show several responses of the test-bed to demonstrate the efficacy of the proposed system.
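A sketch of the estimation-and-redesign loop, assuming a simple double-integrator model and SciPy's pole-placement routine (matrices and noise covariances are placeholders, not the paper's identified model): each tap on the pole-zero plot recomputes the state-feedback gain, while a discrete-time Kalman filter tracks the arm state between vision samples.

```python
import numpy as np
from scipy.signal import place_poles

# Discrete-time double-integrator model of the motor arm (illustrative).
dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                 # vision measures position only
Q, R = np.diag([1e-5, 1e-4]), np.array([[1e-3]])

def redesign_gain(tapped_poles):
    """Recompute the full-state feedback gain when the user taps new
    closed-loop pole locations on the interactive pole-zero plot."""
    return place_poles(A, B, tapped_poles).gain_matrix

def kf_step(x, P, u, z):
    """One predict/update cycle of the discrete-time Kalman filter."""
    x, P = A @ x + B @ u, A @ P @ A.T + Q           # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # Kalman gain
    return x + K @ (z - C @ x), (np.eye(2) - K @ C) @ P

K_fb = redesign_gain([0.9, 0.85])  # e.g., poles tapped inside the unit circle
```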
Advances in Computing and Communications | 2016
Anthony Brill; Jared Alan Frank; Vikram Kapila
Over the last several decades, visual servoing, a vision-based control approach, has been explored as a popular and inexpensive contactless measurement alternative for a variety of industrial robotic systems. Moreover, recent years have witnessed rapid advancements in mobile device applications that have pushed the envelope in the capabilities of smartphone cameras. In this paper, an eye-in-hand pose-based visual servoing (PBVS) approach is presented wherein a smartphone mounted to an inverted pendulum on cart (IPC) system measures both the translational position of a motorized cart and the angular orientation of a pendulum arm for the purpose of feedback control. To perform vision-based control of the IPC system, a discrete-time linear quadratic Gaussian (LQG) controller is implemented. Experimental results are presented to characterize the relationships between the frame rate and image resolution of the smartphone camera, processing and wireless communication delays, the measurement noise, and the performance of the closed-loop system.
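The LQR half of a discrete-time LQG design can be sketched by solving the discrete algebraic Riccati equation; the IPC model below uses placeholder parameters rather than the paper's identified values, and the Kalman filter half follows the same pattern as in the pole-placement example above.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A, B, Q, R):
    """Discrete-time LQR gain K such that u = -K x minimizes the standard
    quadratic cost; paired with a Kalman filter this yields the LQG law."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Illustrative 4-state inverted-pendulum-on-cart model (placeholder numbers):
# x = [cart position, cart velocity, pendulum angle, angular velocity].
dt = 1.0 / 30.0  # assumed camera frame period
A = np.eye(4) + dt * np.array([[0, 1, 0, 0],
                               [0, 0, -0.7, 0],
                               [0, 0, 0, 1],
                               [0, 0, 15.0, 0]])
B = dt * np.array([[0.0], [1.0], [0.0], [-2.0]])
K = dlqr_gain(A, B, np.diag([10, 1, 100, 1]), np.array([[1.0]]))
print(K)  # 1x4 state-feedback gain
```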
International Conference on Robotics and Automation | 2017
Jared Alan Frank; Sai Prasanth Krishnamoorthy; Vikram Kapila
Although human-multi-robot systems have received increased attention in recent years, current implementations rely on structured environments and utilize specialized, research-grade hardware to operate. This letter presents approaches that leverage the visual and inertial sensing of mobile devices to address the estimation and control challenges of multi-robot systems that function in shared spaces with human operators such that both the mobile device camera and robots can move freely in the environment. It is shown that a subset of robots in the system can be used to maintain a reference frame that facilitates tracking and control of the remaining robots to perform tasks, such as object retrieval, using an operator's mobile device as the only sensing and computational platform in the system. To evaluate the performance of the proposed approaches, experiments are conducted in which a system of mobile robots is commanded to retrieve objects in an environment. Results show that, compared to using the visual data alone, integrating both the visual and inertial data from mobile devices yields improvements in performance, flexibility, and computational efficiency in implementing human-multi-robot systems.
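One simple stand-in for the visual-inertial integration is a complementary filter that fuses fast but drifting gyroscope integration with slower, drift-free visual fixes of the device's heading in the robot-maintained reference frame; the blend factor below is an assumption, not the letter's fusion scheme.

```python
import math

class HeadingFusion:
    """Complementary filter blending gyro integration (fast, drifting) with
    visual heading fixes (slow, drift-free). Alpha is an assumed blend factor."""
    def __init__(self, alpha=0.98):
        self.alpha = alpha
        self.heading = 0.0  # radians, in the robot-maintained reference frame

    def on_gyro(self, gyro_z: float, dt: float):
        """Integrate the z-axis angular rate between visual fixes."""
        self.heading += gyro_z * dt

    def on_vision(self, visual_heading: float):
        """Nudge the estimate toward the visual fix, wrapping at +/- pi."""
        err = math.atan2(math.sin(visual_heading - self.heading),
                         math.cos(visual_heading - self.heading))
        self.heading += (1.0 - self.alpha) * err
```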
Archive | 2017
Jared Alan Frank; Vikram Kapila
The embedded technologies integrated into smart mobile devices are becoming increasingly powerful and are being applied to solve disparate societal problems in unprecedented new ways. Billions of smartphones and tablet computers have already reshaped the daily lives of users, and efforts are currently underway to introduce mobile devices to some of the most remote and impoverished areas of the world. With an ever-expanding list of sensors and features, smartphones and tablet computers are now more capable than ever of enhancing not only our interactions with software and with each other, but with the physical world as well. To utilize smart mobile devices at the center of rich human-in-the-loop cyber-physical systems, their sensing, storage, computation, and communication (SSCC) capabilities must be examined from a mechatronics perspective rather than in the contexts in which they are conventionally treated (e.g., messaging, surfing the web, playing games, navigation, and social networking). In this chapter, we discuss how state-of-the-art mobile technologies may be integrated into human-in-the-loop cyber-physical systems and exploited to provide natural mappings for remote interactions with such systems. A demonstrative example is used to show how an intuitive metaphor is uncovered for performing a balancing task through the teleoperation of a ball and beam test-bed.