
Publication


Featured research published by Alex Daniel Edgcomb.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Automated fall detection on privacy-enhanced video

Alex Daniel Edgcomb; Frank Vahid

A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: blurring of the person, silhouetting of the person, covering the person with a graphical box, and covering the person with a graphical oval. We demonstrate that an automated video-based fall detection algorithm can be as accurate on privacy-enhanced video as on raw video. The algorithm operated on video from a stationary in-home camera, using a foreground-background segmentation algorithm to extract a minimum bounding rectangle (MBR) around the motion in the video, and using time series shapelet analysis on the height and width of the rectangle to detect falls. We report the accuracy of fall detection on 23 scenarios, depicted as raw video and as privacy-enhanced videos, involving a sole actor portraying normal activities and various falls. We found that fall detection on privacy-enhanced video, except for the common approach of blurring the person, was competitive with raw video; in particular, the graphical oval privacy enhancement yielded the same accuracy as raw video, namely 0.91 sensitivity and 0.92 specificity.
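
The paper itself does not include code; the following is a minimal sketch of the MBR-extraction step described above, assuming OpenCV's MOG2 background subtractor stands in for the unspecified foreground-background segmentation, with an illustrative area threshold. The per-frame (height, width) series it yields is what the shapelet-based fall detector would consume; the shapelet analysis itself is not sketched.

```python
# Minimal sketch of the minimum-bounding-rectangle (MBR) extraction step.
# Assumptions: MOG2 background subtraction stands in for the paper's
# unspecified segmentation algorithm; min_area is an illustrative filter.
import cv2
import numpy as np

def mbr_stream(video_path, min_area=500):
    """Yield (height, width) of the motion MBR for each frame with motion."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Threshold above 127 to drop MOG2's shadow pixels (marked 127).
        mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
        if blobs:
            x, y, w, h = cv2.boundingRect(np.vstack(blobs))
            yield h, w  # time series fed to the shapelet-based fall detector
    cap.release()
```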


ACM SIGHIT Record | 2012

Privacy perception and fall detection accuracy for in-home video assistive monitoring with privacy enhancements

Alex Daniel Edgcomb; Frank Vahid

Video of in-home activity provides valuable information for assistive monitoring but raises privacy concerns. Raw video can be privacy-enhanced by obscuring the appearance of a person. We consider five privacy enhancements: blur, silhouette, oval, box, and trailing-arrows. We investigate whether a privacy enhancement exists that provides sufficient perceived privacy while enabling accurate fall detection by humans. We recorded 23 one-minute videos involving normal household activities, falling, and lying on the floor after an earlier fall, and created versions of each video for each privacy setting. We conducted an experiment with 376 undergraduate, non-engineering student participants to measure perceived privacy protection and the participants' fall detection accuracy for each privacy setting. Results indicate that the oval provides sufficient perceived privacy for 88% of participants while still supporting fall detection accuracy of 89%, and that the common privacy enhancements blur and silhouette were perceived to provide insufficient privacy.


IEEE International Conference on Healthcare Informatics | 2013

Estimating Daily Energy Expenditure from Video for Assistive Monitoring

Alex Daniel Edgcomb; Frank Vahid

Automatically estimating a person's energy expenditure has numerous uses, including ensuring sufficient daily activity by an elderly live-alone person, such activity having been shown to have numerous benefits. Most previous work requires a person to wear a sensor device. We introduce a video-based activity level estimation technique that takes advantage of increasingly-common in-home camera systems. We consider several features of a motion bounding rectangle for such estimation, including changes in height and width, and vertical and horizontal velocities and accelerations. Experiments involved 36 recordings of normal household activity, such as reading while seated, sweeping, and light exercising, involving 4 different actors. Results show, somewhat surprisingly, that the horizontal acceleration feature leads to an activity level estimation fidelity of 0.994 correlation with a commercial BodyBugg body-worn energy measurement device. Furthermore, the approach yielded 90.9% average accuracy of energy expenditure.
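
As a rough illustration of the horizontal-acceleration feature discussed above, the sketch below twice-differences the horizontal center position of the motion bounding rectangle. The frame rate is an assumption, and the paper's calibration from this feature to energy expenditure is not reproduced.

```python
# Sketch of the horizontal-acceleration feature: second difference of the
# MBR's horizontal center. The 30 fps rate is an illustrative assumption.
import numpy as np

def horizontal_acceleration(centers_x, fps=30.0):
    """centers_x: per-frame horizontal center of the motion MBR, in pixels."""
    dt = 1.0 / fps
    velocity = np.diff(centers_x) / dt        # pixels per second
    acceleration = np.diff(velocity) / dt     # pixels per second^2
    return np.abs(acceleration)               # per-frame magnitude

# Usage on a synthetic center-position trace (illustrative only).
rng = np.random.default_rng(0)
centers = np.cumsum(rng.normal(size=300))
activity_signal = horizontal_acceleration(centers)
```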


Frontiers in Education Conference | 2015

Students learn more with less text that covers the same core topics

Alex Daniel Edgcomb; Frank Vahid; Roman L. Lysecky

For textbooks on technical topics, the typical amount of text used is more than what many college students will read. Some teachers observe, and students report, that students commonly skim such text. As such, a writing style that aggressively minimizes text while still teaching the core technical topic may improve student learning; if the text is short enough, students may read and study it more carefully. The objective of this study was to compare the effect of text quantity on the amount learned. We created and compared content styles using a lesson that taught Google search techniques. The two main content styles were normal text and minimal text. The normal text style included 6-12 sentences followed by 1-3 examples. The minimal text style included 1-2 sentences followed by 1-3 examples. We conducted a randomized control study with 168 participants enrolled in a college-level Introduction to Computing course for non-computing majors. Each participant was randomly assigned one lesson style. We provided a pre-lesson and post-lesson quiz, each with ten questions. Additionally, the participants completed background and follow-up surveys. The study was part of a course homework assignment, so self-selection bias was limited. The course is primarily taken by non-majors and covers the basics of Word, Excel, and HTML. An improvement score is a participant's post-lesson quiz score minus their pre-lesson quiz score. The average improvement score for minimal text was 2.4 (6.5 - 4.1), which is higher (p < 0.01) than the average improvement score for normal text, 1.1 (5.1 - 4.0). Thus, teaching the same topic using less text led to more learning. The conclusion is not that materials should be watered down, but rather that great attention should be paid to using minimal text while teaching the same core topics.
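
The improvement score is simple arithmetic (post-lesson minus pre-lesson quiz score, averaged per group); a minimal sketch, with illustrative scores rather than the study's data:

```python
# Mean improvement score: average of (post - pre) quiz scores per group.
def mean_improvement(pre_scores, post_scores):
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Illustrative scores only, not the study's data.
print(mean_improvement([4, 5, 3], [6, 7, 6]))  # (2 + 2 + 3) / 3 = 2.33...
```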


IEEE International Conference on Healthcare Informatics | 2013

Automated In-Home Assistive Monitoring with Privacy-Enhanced Video

Alex Daniel Edgcomb; Frank Vahid

A privacy-enhanced video obscures the appearance of a person in the video. We consider four privacy enhancements: person blurred, person silhouetted, person covered with a bounding-oval, and person covered by a bounding-box. We demonstrate that privacy-enhanced video can be as accurate as raw video for eight in-home assistive monitoring goals: energy expenditure estimation, in room too long, leave but not return at night, arisen in morning, not arisen in morning, in region too long, abnormally inactive during day, and fall detection. Each monitoring goal's solution was trained using one actor and tested using two different actors. The privacy enhancements of silhouette, bounding-oval, and bounding-box did not degrade achievement of the eight assistive monitoring goals. Raw video had a fidelity of 0.994 for the goal of energy expenditure estimation, while silhouette had 0.995, bounding-oval had 0.994, and bounding-box had 0.997. The fall detection algorithm yielded the same sensitivity of 0.91 and specificity of 0.92 for raw and bounding-oval video, while silhouette had a sensitivity of 0.91 and specificity of 0.75, and bounding-box had a sensitivity of 0.82 and specificity of 0.92. The other 6 goals yielded perfect sensitivity and specificity for raw and privacy-enhanced video, with the exception of blur video's sensitivity of 0.5 for the in-region-too-long goal.
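
The sensitivity and specificity figures above follow the standard definitions; a minimal sketch of their computation from per-scenario outcomes (the example labels below are hypothetical, not the paper's data):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(predicted, actual):
    """predicted/actual: booleans per scenario (True = fall detected/occurred)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical detector outputs vs. ground truth for six scenarios.
pred = [True, True, False, True, False, False]
truth = [True, True, True, False, False, False]
print(sensitivity_specificity(pred, truth))  # (0.666..., 0.666...)
```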


ACM Transactions on Management Information Systems | 2013

Accurate and Efficient Algorithms that Adapt to Privacy-Enhanced Video for Improved Assistive Monitoring

Alex Daniel Edgcomb; Frank Vahid

Automated monitoring algorithms operating on live video streamed from a home can effectively aid in several assistive monitoring goals, such as detecting falls or estimating daily energy expenditure. Use of video raises obvious privacy concerns. Several privacy enhancements have been proposed, such as modifying a person in video by introducing blur, silhouette, or bounding-box. Person extraction is fundamental in video-based assistive monitoring and is degraded in the presence of privacy enhancements; however, privacy enhancements have characteristics that can opportunistically be adapted to. We propose two adaptive algorithms for improving assistive monitoring goal performance with privacy-enhanced video: specific-color hunter and edge-void filler. A nonadaptive algorithm, foregrounding, is used as the default algorithm for the adaptive algorithms. We compare nonadaptive and adaptive algorithms with 5 common privacy enhancements on the effectiveness of 8 automated monitoring goals. The nonadaptive algorithm's performance on privacy-enhanced video is degraded from raw video. However, the adaptive algorithms can compensate for the degradation. Energy estimation accuracy in our tests degraded from 90.9% to 83.9%, but the adaptive algorithms significantly compensated by bringing the accuracy up to 87.1%. Similarly, fall detection accuracy degraded from 1.0 sensitivity to 0.86 and from 1.0 specificity to 0.79, but the adaptive algorithms compensated accuracy back to 0.92 sensitivity and 0.90 specificity. Additionally, the adaptive algorithms were computationally more efficient than the nonadaptive algorithm, averaging 1.7% more frames processed per second.
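
The abstract does not detail the two adaptive algorithms; the sketch below is a speculative reading of the "specific-color hunter" idea: when a privacy enhancement paints the person with a known solid fill (e.g., a silhouette or box), person extraction can match that color directly rather than rely on generic foregrounding. The fill color and tolerance are illustrative assumptions, not the paper's values.

```python
# Speculative sketch of a "specific-color hunter": locate pixels near the
# privacy enhancement's known fill color and bound them. Color and
# tolerance below are illustrative assumptions.
import cv2
import numpy as np

def hunt_color_mbr(frame_bgr, fill_bgr=(0, 0, 0), tol=10):
    """Return the bounding rectangle of pixels near the known fill color."""
    lo = np.clip(np.array(fill_bgr) - tol, 0, 255).astype(np.uint8)
    hi = np.clip(np.array(fill_bgr) + tol, 0, 255).astype(np.uint8)
    mask = cv2.inRange(frame_bgr, lo, hi)
    if cv2.countNonZero(mask) == 0:
        return None  # caller falls back to the nonadaptive foregrounding
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    return x, y, w, h
```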


International Health Informatics Symposium | 2012

MNFL: the monitoring and notification flow language for assistive monitoring

Alex Daniel Edgcomb; Frank Vahid

Assistive monitoring analyzes data from sensors and cameras to detect situations of interest, and notifies appropriate persons in response. Customization of assistive technology by end-users is necessary for technology adoption and retention. We introduce MNFL, the Monitoring and Notification Flow Language, developed over the past several years to allow lay people without programming experience, but with some technical acumen, to effectively program customized monitoring and notification systems. MNFL is a graphical flow language having intuitive yet sufficiently powerful execution semantics and built-in constructs for assistive monitoring. We describe the language's semantics and built-in constructs, demonstrate the language's use for customizing several common assistive monitoring tasks, and provide results of initial usability trials showing that lay people with almost no training on MNFL can, more than 50% of the time and in just a few minutes, select and connect the right 1-2 blocks to complete basic applications that have 4-5 blocks total.
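
MNFL itself is graphical, so the toy sketch below only illustrates the flow idea in Python: blocks consume and produce streams, and an application is a few blocks wired together. The block names and threshold are illustrative assumptions, not MNFL's actual constructs.

```python
# Toy stream-based "blocks" wired together, mimicking a flow-language app.
def motion_sensor(readings):
    yield from readings                  # source block: raw sensor values

def threshold(stream, level=50):
    for value in stream:                 # filter block: turn values into events
        yield value > level

def notify(stream, message="Motion detected"):
    for event in stream:                 # sink block: act on each event
        if event:
            print(message)

# Wiring three blocks: sensor -> threshold -> notification.
notify(threshold(motion_sensor([10, 20, 80, 30, 95])))
```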


Proceedings of the 2nd Conference on Wireless Health | 2011

Feature extractors for integration of cameras and sensors during end-user programming of assistive monitoring systems

Alex Daniel Edgcomb; Frank Vahid

Assistive monitoring systems increasingly include cameras along with sensors. End-users require the capability to program such systems to monitor user-specified events and provide customized notifications in response. We introduce feature extractors, which provide a means for integrating camera video with sensor data. A feature extractor takes a video stream as input, and outputs a stream of integer values corresponding to the amount of a particular sensor phenomenon such as motion, sound, or light, or of more advanced phenomena such as human motion, screams, or falls. Feature extractors provide an elegant means for end-users to integrate cameras into their monitoring programs. We provide examples illustrating their effectiveness for various common assistive monitoring scenarios, and summarize usability trials with 51 lay users demonstrating 56%-96% correct utilization of feature extractors.
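
As a rough sketch of a "motion" feature extractor in the sense described above, the code below turns a video stream into a stream of integers using simple frame differencing; the internals and the changed-pixel threshold are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a motion feature extractor: video in, one integer per frame out,
# where the integer measures how much of the frame changed.
import cv2
import numpy as np

def motion_feature_extractor(video_path):
    """Yield one integer per frame: the count of changed pixels."""
    cap = cv2.VideoCapture(video_path)
    previous = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is not None:
            diff = cv2.absdiff(gray, previous)
            yield int(np.count_nonzero(diff > 25))  # illustrative threshold
        previous = gray
    cap.release()
```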


Technical Symposium on Computer Science Education | 2018

Python Versus C++: An Analysis of Student Struggle on Small Coding Exercises in Introductory Programming Courses

Nabeel Alzahrani; Frank Vahid; Alex Daniel Edgcomb; Kevin Nguyen; Roman L. Lysecky

Many teachers of CS 1 (introductory programming) have switched to Python rather than C, C++, or Java. One reason is the belief that Python's interpreted nature plus simpler syntax and semantics ease a student's learning, but data supporting that belief is scarce. This paper addresses the question: Do Python learners struggle less than C++ learners? We analyzed student submissions on small coding exercises in CS 1 courses at 20 different universities: 10 courses using Python and 11 using C++. Each course used either the Python or C++ version of an online textbook from one publisher, each book having 100+ small coding exercises, expected to take 2-5 minutes each. We considered 11 exercises whose Python and C++ versions were nearly identical and that appeared in various chapters. We defined a struggle rate for exercises, where struggle means a student spent excessive time or attempts on an exercise. Based on that rate, we found that learning in Python was not eased; in fact, Python students had significantly higher struggle rates than C++ students (26% vs. 13%). Higher rates were seen even when considering only classes with no prerequisites, classes for majors only, or classes for non-majors only. We encourage the community to do further analyses, to help guide teachers when choosing a CS 1 language.
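
A minimal sketch of a struggle-rate computation in the spirit described above; the time and attempt cutoffs are illustrative assumptions, not the paper's thresholds.

```python
# Struggle rate: fraction of student-exercise pairs with excessive time or
# attempts. The cutoffs are illustrative assumptions.
def struggle_rate(submissions, max_minutes=15, max_attempts=10):
    """submissions: (minutes_spent, attempt_count) per student-exercise pair."""
    struggled = sum(1 for minutes, attempts in submissions
                    if minutes > max_minutes or attempts > max_attempts)
    return struggled / len(submissions)

# Two of four pairs exceed a cutoff, so the rate is 0.5.
print(struggle_rate([(3, 2), (20, 4), (5, 1), (8, 12)]))
```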


Technical Symposium on Computer Science Education | 2018

Interactive, Language-neutral Flowcharts and Pseudocode for Teaching Core CS0/1 Programming Concepts: (Abstract Only)

Alex Daniel Edgcomb; Frank Vahid

Introductory programming courses often use a full-featured programming language, such as Python, Java, or C++, wherein students concurrently learn programming concepts along with language syntax. However, many instructors believe that learning programming concepts first, then learning a specific language's syntax, may be more effective than learning both concurrently. Thus, some courses first teach programming via flowcharts and pseudocode. Some tools and materials support teaching programming via flowcharts, but we felt much improvement was needed. Therefore, we developed a new flowchart language, named Coral-Charts, specifically intended to teach fundamental programming constructs like assignments, branches, loops, functions, and arrays. We developed a web-based graphical simulator for Coral-Charts; no local tool installation is necessary (unlike the most common existing flowchart tool). The simulator always displays the values of variables, which helps students comprehend the impact of statements. The simulator enforces a layout that intentionally mirrors textual code's top-to-bottom execution and sub-statement indentation, easing the transition to a textual language. Furthermore, we defined a new pseudocode-like language, named Coral (corallanguage.org), that is executable and that matches Coral-Charts. The syntax is ultra-simple and only essential constructs are included. Certain features automatically detect or eliminate many new-learner errors. Students can type Coral code, from which a Coral-Charts flowchart is auto-generated, and students can execute both the code and the flowcharts. Coral was carefully designed to lead naturally into languages like Python, Java, or C++. Coral and Coral-Charts are used in the textbook Fundamental Programming Concepts (zybooks.com/catalog/fundamental-programming-concepts). We welcome feedback on the approach and potential collaborators in implementing experiments.

Collaboration


Dive into Alex Daniel Edgcomb's collaboration.

Top Co-Authors

Frank Vahid, University of California
A. Knoesen, University of California
Bailey Miller, University of California
Kevin Nguyen, University of California
Scott Sirowy, University of California