
Publication


Featured research published by Scott W. Greenwald.


Mobile and Ubiquitous Multimedia | 2015

A benchmark for interactive augmented reality instructions for assembly tasks

Markus Funk; Thomas Kosch; Scott W. Greenwald; Albrecht Schmidt

With the opportunity to customize ordered products, assembly tasks are becoming more and more complex. To meet these increased demands, a variety of interactive instruction systems have been introduced. Although these systems may have a big impact on overall efficiency and cost of the manufacturing process, it has been difficult to optimize them in a scientific way. The challenge is to introduce performance metrics that apply across different tasks and find a uniform experiment design. In this paper, we address this challenge by proposing a standardized experiment design for evaluating interactive instructions and making them comparable with each other. Further, we introduce a General Assembly Task Model, which differentiates between task-dependent and task-independent measures. Through a user study with 12 participants, we evaluate the experiment design and the proposed task model using an abstract pick-and-place task and an artificial industrial task. Finally, we provide paper-based instructions for the proposed task as a baseline for evaluating Augmented Reality instructions.
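
The General Assembly Task Model is only named in this abstract; as an illustration of its central split between task-dependent and task-independent measures, here is a minimal data-model sketch in Python. All field names (completion time, error count, instruction views, pick accuracy) are hypothetical examples, not the paper's measures.

```python
from dataclasses import dataclass, field

@dataclass
class TaskIndependentMeasures:
    """Measures that apply across assembly tasks (hypothetical names)."""
    completion_time_s: float          # total time to finish the task
    error_count: int                  # wrongly picked or placed parts
    instruction_views: int            # how often instructions were consulted

@dataclass
class TaskDependentMeasures:
    """Measures tied to one concrete task, keyed by free-form name."""
    values: dict = field(default_factory=dict)

@dataclass
class BenchmarkRun:
    participant_id: str
    instruction_system: str           # e.g. "paper", "projected AR"
    independent: TaskIndependentMeasures
    dependent: TaskDependentMeasures

run = BenchmarkRun(
    participant_id="P01",
    instruction_system="paper",
    independent=TaskIndependentMeasures(412.0, 3, 17),
    dependent=TaskDependentMeasures({"pick_accuracy": 0.92}),
)
print(run.independent.completion_time_s)
```

Keeping the task-independent measures in a fixed schema is what would make runs of different tasks and instruction systems comparable, which is the point of the proposed benchmark.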


Symposium on Principles of Programming Languages | 2010

Reconfigurable asynchronous logic automata (RALA)

Neil Gershenfeld; David Allen Dalrymple; Kailiang Chen; Ara Knaian; Forrest Green; Erik D. Demaine; Scott W. Greenwald; Peter Schmidt-Nielsen

Computer science has served to insulate programs and programmers from knowledge of the underlying mechanisms used to manipulate information; however, this fiction is increasingly hard to maintain as computing devices decrease in size and systems increase in complexity. Manifestations of these limits appearing in computers include scaling issues in interconnect, dissipation, and coding. Reconfigurable Asynchronous Logic Automata (RALA) is an alternative formulation of computation that seeks to align logical and physical descriptions by exposing rather than hiding this underlying reality. Instead of physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. We introduce the design of RALA, review its relationships to its many progenitors, and discuss its benefits, implementation, programming, and extensions.
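
The cell-level discipline the abstract describes (cells asynchronously passing state tokens) can be made concrete with a toy simulator. The sketch below, assuming simplified two-input logic cells on named wires rather than a spatial lattice, shows the defining property: a cell fires only when all of its input wires hold tokens and its output wire is empty, in no fixed order.

```python
import random

class Cell:
    """Toy asynchronous logic cell: fires when every input wire holds a
    token and the output wire is empty, mimicking RALA's token-passing
    discipline (simplified; real RALA cells form a 2D/3D lattice)."""
    def __init__(self, op, inputs, output):
        self.op, self.inputs, self.output = op, inputs, output

    def ready(self, wires):
        return (all(wires[i] is not None for i in self.inputs)
                and wires[self.output] is None)

    def fire(self, wires):
        args = [wires[i] for i in self.inputs]
        for i in self.inputs:
            wires[i] = None                  # consume input tokens
        wires[self.output] = self.op(*args)  # emit result token

# wires hold 0/1 tokens or None (empty)
wires = {"a": 1, "b": 1, "c": 0, "ab": None, "out": None}
cells = [
    Cell(lambda x, y: x & y, ["a", "b"], "ab"),
    Cell(lambda x, y: x | y, ["ab", "c"], "out"),
]

# asynchronous execution: fire any ready cell in arbitrary order
while wires["out"] is None:
    ready = [c for c in cells if c.ready(wires)]
    random.choice(ready).fire(wires)

print(wires["out"])  # 1: (a AND b) OR c
```

Because correctness depends only on token availability, not on a global clock, the firing order can be randomized without changing the result, which is the property that lets logical descriptions track physical ones.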


Virtual Reality Software and Technology | 2016

Eye gaze tracking with Google Cardboard using Purkinje images

Scott W. Greenwald; Luke Loreti; Markus Funk; Ronen Zilberman; Pattie Maes

Mobile phone-based Virtual Reality (VR) is rapidly growing as a platform for stereoscopic 3D and non-3D digital content and applications. The ability to track eye gaze in these devices would be a tremendous opportunity on two fronts: firstly, as an interaction technique, where interaction is currently awkward and limited, and secondly, for studying human visual behavior. We propose a method to add eye gaze tracking to these existing devices using their on-board display and camera hardware, with a minor modification to the headset enclosure. We present a proof-of-concept implementation of the technique and show results demonstrating its feasibility. The software we have developed will be made available as open source to benefit the research community.
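
The tracking method itself is only summarized here, but any Purkinje-image approach needs to locate the corneal glint (the first Purkinje image) in the eye camera frame. The sketch below shows one conventional way to do that with OpenCV; the threshold value and single-glint assumption are illustrative choices, not details from the paper.

```python
import numpy as np
import cv2

def find_glint(eye_gray: np.ndarray):
    """Locate the brightest specular reflection (first Purkinje image)
    in a grayscale eye image and return its centroid (x, y)."""
    # suppress sensor noise, then keep only near-saturated pixels
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    _, bright = cv2.threshold(blurred, 230, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(bright)
    if moments["m00"] == 0:
        return None                       # no glint visible in this frame
    return (moments["m10"] / moments["m00"],
            moments["m01"] / moments["m00"])

# synthetic test frame: dark eye region with one bright reflection
frame = np.full((120, 160), 40, np.uint8)
cv2.circle(frame, (90, 60), 3, 255, -1)   # simulated Purkinje reflection
print(find_glint(frame))                  # ~ (90.0, 60.0)
```

A full tracker would combine the glint position with a pupil estimate and a per-user calibration to map the pair to a gaze direction; this sketch covers only the detection step.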


International Conference on Immersive Learning | 2017

Investigating Social Presence and Communication with Embodied Avatars in Room-Scale Virtual Reality

Scott W. Greenwald; Zhangyuan Wang; Markus Funk; Pattie Maes

Room-scale virtual reality (VR) holds great potential as a medium for communication and collaboration in remote and same-time, same-place settings. Related work has established that movement realism can create a strong sense of social presence, even in the absence of photorealism. Here, we explore the noteworthy attributes of communicative interaction using embodied minimal avatars in room-scale VR in the same-time, same-place setting. To our knowledge, our system is the first in the research community to enable this kind of interaction. We carried out an experiment in which pairs of users performed two activities in contrasting variants: VR vs. face-to-face (F2F), and 2D vs. 3D. Objective and subjective measures were used to compare these, including motion analysis, electrodermal activity, questionnaires, retrospective think-aloud protocol, and interviews. On the whole, participants communicated effectively in VR to complete their tasks, and reported a strong sense of social presence. The system's high-fidelity capture and display of movement seems to have been a key factor in supporting this. Our results confirm some expected shortcomings of VR compared to F2F, but also some non-obvious advantages. The limited anthropomorphic properties of the avatars presented some difficulties, but the impact of these varied widely between the activities. In the 2D vs. 3D comparison, the basic affordance of freehand drawing in 3D was new to most participants, resulting in novel observations and open questions. We also present methodological observations across all conditions concerning the measures that did and did not reveal differences between conditions, including unanticipated properties of the think-aloud protocol applied to VR.
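
The abstract lists motion analysis among its objective measures without specifying the metric. One simple, commonly used quantity is total head-path length per participant; the sketch below computes it from hypothetical tracked headset positions and is not the paper's analysis.

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total distance traveled by a tracked point (e.g. the headset),
    given an (N, 3) array of positions sampled over time."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

# hypothetical 90 Hz head-position traces for two conditions
rng = np.random.default_rng(0)
vr_trace  = np.cumsum(rng.normal(0, 0.002, (900, 3)), axis=0)
f2f_trace = np.cumsum(rng.normal(0, 0.003, (900, 3)), axis=0)
print(f"VR:  {path_length(vr_trace):.2f} m")
print(f"F2F: {path_length(f2f_trace):.2f} m")
```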


International Conference on Advanced Learning Technologies | 2016

EVA: Exploratory Learning with Virtual Companions Sharing Attention and Context

Scott W. Greenwald; Markus Funk; Luke Loreti; David Mayo; Pattie Maes

Exploratory Learning with Virtual Companions Sharing Attention and Context (EVA) is a concept for mediated teaching and learning that sits at the intersection of exploratory learning, telepresence, and attention awareness. The companion teacher is informed about the attentional state and environment of the learner, and can refer directly to this environment through marking or annotation. To the learner, the companion is virtual -- either human or automatic -- and, if human, either physically copresent or remote. The content and style of presentation are tailored to the learner's momentary level of interest or focus, and her attention can be guided to salient environmental elements (e.g. visual) in order to convey desired information. We define a design space for such systems, which applies to learning in Augmented Reality and Virtual Reality, and can be employed as a framework for design and evaluation. We demonstrate this through trials with two proof-of-concept systems, one in AR and one in VR, with a human companion. We conclude that the EVA design space defines a powerful set of systems for learning and finish by presenting guidelines for making such systems maximally effective.
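
As an illustration of the attention-sharing loop EVA describes, the sketch below defines a hypothetical message format in which the learner's client reports a gaze target and dwell time to the companion, and the companion annotates that target. The schema and every field name are assumptions for illustration, not the authors' protocol.

```python
import json
import time

def attention_update(learner_id, gaze_target, dwell_s):
    """Learner-side report of attentional state (hypothetical schema)."""
    return json.dumps({
        "type": "attention_update",
        "learner": learner_id,
        "gaze_target": gaze_target,   # id of the object being looked at
        "dwell_seconds": dwell_s,     # sustained-interest indicator
        "timestamp": time.time(),
    })

def companion_annotation(target, text):
    """Companion marks an element in the learner's environment."""
    return json.dumps({"type": "annotate", "target": target, "text": text})

print(attention_update("learner-1", "exhibit/steam-engine", 4.2))
print(companion_annotation("exhibit/steam-engine",
                           "Notice the governor regulating speed."))
```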


Human Factors in Computing Systems | 2015

TakeTwo: Using Google Glass for Augmented Memory

Scott W. Greenwald; Christian David Vazquez; Pattie Maes

Recent advances in wearable technology create the opportunity for seamless interactions that would be too cumbersome or limited on handheld devices such as cameras or mobile phones. The use of a head-mounted camera and display can allow users to capture and review audiovisual information without disrupting the continuity of their ongoing activities. When presented with large amounts of information, people are prone to miss or forget details which can be essential later. TakeTwo builds on the capabilities of such wearable devices to provide a virtual extension of memory, i.e. augmented memory, to aid users in learning and recall. In particular, we use Google Glass to capture audiovisual content of ongoing events, and allow users to actively bookmark moments for later review. The Thalmic Labs Myo armband allows users to create bookmarks with discrete hand gestures. Future work will explore automatic bookmark creation triggered by physiological signals such as electrodermal activity, EEG, eye tracking, and motion. This will allow users to review events based on signals of emotional arousal, confusion, focus, or understanding, furthering their ability to recall and reinforce memory when it is needed most.
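
Here is a minimal sketch of the bookmarking mechanic, assuming a hypothetical gesture callback, trigger-gesture name, and review window rather than the actual Glass/Myo integration:

```python
import time
from dataclasses import dataclass

@dataclass
class Bookmark:
    t: float            # seconds into the ongoing recording
    source: str         # what triggered it: "gesture", later maybe "eda"

class BookmarkStore:
    """TakeTwo-style bookmarking sketch: a discrete event (e.g. a Myo
    hand gesture) marks a moment in a continuous recording for later
    review. The gesture plumbing itself is assumed, not shown."""
    def __init__(self, review_window_s=30.0):
        self.start = time.time()
        self.window = review_window_s
        self.bookmarks = []

    def on_gesture(self, name):
        if name == "double_tap":      # hypothetical trigger gesture
            self.bookmarks.append(Bookmark(time.time() - self.start,
                                           "gesture"))

    def review_clips(self):
        """Return (start, end) spans of the recording to replay."""
        return [(max(0.0, b.t - self.window), b.t) for b in self.bookmarks]

store = BookmarkStore()
store.on_gesture("double_tap")
print(store.review_clips())
```

The same store could accept bookmarks from the physiological triggers mentioned above simply by calling it with a different source tag.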


Cryptography and Security | 2012

Cryptography with asynchronous logic automata

Peter Schmidt-Nielsen; Kailiang Chen; Jonathan Bachrach; Scott W. Greenwald; Forrest Green; Neil Gershenfeld

We introduce the use of asynchronous logic automata (ALA) for cryptography. ALA aligns the descriptions of hardware and software for portability, programmability, and scalability. An implementation of the A5/1 stream cipher is provided as a design example in a concise hardware description language, Snap, and we discuss a power- and timing-balanced cell design.
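
A5/1's construction is public: three LFSRs of 19, 22, and 23 bits with majority-rule irregular clocking, whose output bits are XORed to form the keystream. For reference, here is a plain-software model of that construction in Python; the paper's design example is instead expressed in the Snap hardware description language for ALA cells.

```python
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

class A51:
    """Software model of the A5/1 stream cipher: three LFSRs with
    majority-rule irregular clocking. This is the textbook
    construction, independent of the paper's ALA realization."""
    TAPS = (0x072000, 0x300000, 0x700080)   # feedback taps per register
    LENS = (19, 22, 23)                     # register lengths in bits
    CLK  = (8, 10, 10)                      # clocking-bit positions

    def __init__(self, key64: int, frame22: int):
        self.R = [0, 0, 0]
        for i in range(64):                 # mix in 64-bit session key
            self._clock_all((key64 >> i) & 1)
        for i in range(22):                 # mix in 22-bit frame number
            self._clock_all((frame22 >> i) & 1)
        for _ in range(100):                # 100 warm-up clocks, discarded
            self._clock_majority()

    def _step(self, j, feed_in=0):
        r, n = self.R[j], self.LENS[j]
        fb = bin(r & self.TAPS[j]).count("1") & 1   # parity of tap bits
        self.R[j] = ((r << 1) | (fb ^ feed_in)) & ((1 << n) - 1)

    def _clock_all(self, bit):              # regular clocking during setup
        for j in range(3):
            self._step(j, bit)

    def _clock_majority(self):              # irregular clocking thereafter
        clk = [(self.R[j] >> self.CLK[j]) & 1 for j in range(3)]
        m = majority(*clk)
        for j in range(3):
            if clk[j] == m:
                self._step(j)

    def keystream_bit(self):
        self._clock_majority()
        return ((self.R[0] >> 18) ^ (self.R[1] >> 21)
                ^ (self.R[2] >> 22)) & 1

cipher = A51(key64=0x0123456789ABCDEF, frame22=0x2F4)
print([cipher.keystream_bit() for _ in range(8)])
```

The majority-rule clocking is what makes timing balance interesting in hardware: which registers advance depends on secret state, which is why the paper discusses a power- and timing-balanced cell design.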


International Conference on Computer Graphics and Interactive Techniques | 2017

Visualization and labeling of point clouds in virtual reality

Jonathan Dyssel Stets; Yongbin Sun; Wiley Corning; Scott W. Greenwald

We present a Virtual Reality (VR) application for labeling and handling point cloud data sets. A series of room-scale point clouds are recorded as a video sequence using a Microsoft Kinect. The data can be played and paused, and frames can be skipped just like in a video player. The user can walk around and inspect the data while it is playing or paused. Using the tracked hand-held controller, the user can select and label individual parts of the point cloud. The points are highlighted with a color when they are labeled. With a tracking algorithm, the labeled points can be tracked from frame to frame to ease the labeling process. Our sample data is an RGB point cloud recording of two people juggling with pins. Here, the user can select and label, for example, the juggler pins as shown in Figure 1. Each juggler pin is labeled with various colors to indicate different labels.
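
The frame-to-frame label tracking is not detailed in this abstract; a common baseline for it is nearest-neighbor label propagation, sketched below with a KD-tree. The distance threshold and synthetic data are illustrative, and this is not necessarily the authors' tracking algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_labels(prev_pts, prev_labels, next_pts, max_dist=0.05):
    """Carry labels from one point-cloud frame to the next by
    nearest-neighbor matching (label 0 = unlabeled)."""
    tree = cKDTree(prev_pts)
    dist, idx = tree.query(next_pts, k=1)
    labels = prev_labels[idx].copy()
    labels[dist > max_dist] = 0      # too far from any previous point
    return labels

# two synthetic frames: the cloud drifts slightly between frames
rng = np.random.default_rng(1)
frame0 = rng.uniform(0, 1, (500, 3))
labels0 = np.zeros(500, dtype=int)
labels0[:50] = 1                     # user labeled 50 points in VR
frame1 = frame0 + rng.normal(0, 0.005, frame0.shape)
print((propagate_labels(frame0, labels0, frame1) == 1).sum())  # ~50
```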


User Interface Software and Technology | 2015

Responsive Facilitation of Experiential Learning Through Access to Attentional State

Scott W. Greenwald

The planned thesis presents a vision of the future of learning, where learners explore environments, physical and virtual, in a curiosity-driven or intrinsically motivated way, and receive contextual information from a companion facilitator or teacher. Learners are instrumented with sensors that convey their cognitive and attentional state to the companion, who can then accurately judge what is interesting or relevant, and when is a good moment to jump in. I provide a broad definition of the possible types of sensor input as well as the modalities of intervention, and then present a specific proof-of-concept system that uses gaze behavior as a means of communication between the learner and a human companion.
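
Using gaze behavior as a communication channel presupposes segmenting raw gaze samples into fixations. The dispersion-threshold algorithm (I-DT) below is one standard way to do that, shown as an illustrative sketch rather than the thesis's own method; the window length and dispersion threshold are arbitrary.

```python
import numpy as np

def idt_fixations(gaze, min_len=6, disp_thresh=0.03):
    """Dispersion-threshold (I-DT) fixation detection over (N, 2) gaze
    samples in normalized screen coordinates. Returns a list of
    (start_idx, end_idx, centroid) tuples."""
    def dispersion(win):                 # horizontal + vertical spread
        return np.ptp(win[:, 0]) + np.ptp(win[:, 1])

    fixations, i, n = [], 0, len(gaze)
    while i + min_len <= n:
        j = i + min_len
        if dispersion(gaze[i:j]) <= disp_thresh:
            # grow the window while the samples stay tightly clustered
            while j < n and dispersion(gaze[i:j + 1]) <= disp_thresh:
                j += 1
            fixations.append((i, j, gaze[i:j].mean(axis=0)))
            i = j
        else:
            i += 1                       # slide past noisy/saccade samples
    return fixations

# synthetic trace: fixate, saccade, fixate
rng = np.random.default_rng(2)
a = rng.normal([0.3, 0.3], 0.003, (30, 2))
b = rng.normal([0.7, 0.6], 0.003, (30, 2))
print(len(idt_fixations(np.vstack([a, b]))))   # typically 2 fixations
```

Fixation targets and durations extracted this way are the kind of signal a companion could use to judge what the learner finds interesting and when to intervene.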


IEEE Pervasive Computing | 2015

Pervasive Interaction Across Displays

Lars Lischke; Dominik Weber; Scott W. Greenwald

Streaming and connecting different devices ubiquitously are key technologies for smart environments. Here, the authors present a few commercially available technologies supporting this and provide an outlook on how displays might become a service themselves.

Collaboration


Dive into Scott W. Greenwald's collaborations.

Top Co-Authors

Pattie Maes, Massachusetts Institute of Technology
Wiley Corning, Massachusetts Institute of Technology
Forrest Green, Massachusetts Institute of Technology
Kailiang Chen, Massachusetts Institute of Technology
Markus Funk, University of Stuttgart
Neil Gershenfeld, Massachusetts Institute of Technology
Peter Schmidt-Nielsen, Massachusetts Institute of Technology
Ara Knaian, Massachusetts Institute of Technology
Christian David Vazquez, Massachusetts Institute of Technology
David Allen Dalrymple, Massachusetts Institute of Technology