Publications

Featured research published by Sarah Ebling.


Conference on Computers and Accessibility | 2015

Demographic and Experiential Factors Influencing Acceptance of Sign Language Animation by Deaf Users

Hernisa Kacorri; Matt Huenerfauth; Sarah Ebling; Kasmira Patel; Mackenzie Willard

Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Researchers evaluate their sign language animation systems by collecting subjective judgments and comprehension-question responses from deaf participants. Through a survey (N=62) and multiple regression analysis, we identified relationships between (a) demographic and technology experience/attitude characteristics of participants and (b) the subjective and objective scores collected from them during the evaluation of sign language animation systems. These findings suggest that researchers should collect and report these participant characteristics in publications about their studies, yet there is currently no consensus on such reporting in the field. We present a set of questions in ASL and English that researchers can use to measure these participant characteristics; reporting such data would enable researchers to better interpret and compare results from studies with different participant pools.
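
As a hedged illustration of the multiple-regression methodology described above (the data frame, predictor names, and values below are hypothetical, not the study's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses: one row per participant.
df = pd.DataFrame({
    "comprehension_score": [0.72, 0.55, 0.81, 0.60, 0.90, 0.48],
    "age":                 [24, 31, 45, 28, 52, 19],
    "asl_years":           [20, 10, 40, 5, 50, 19],  # years of ASL use
    "tech_attitude":       [4, 3, 5, 2, 4, 3],       # 1-5 Likert rating
})

# Multiple linear regression: an evaluation score as a function of
# demographic and technology-attitude predictors.
model = smf.ols("comprehension_score ~ age + asl_years + tech_attitude",
                data=df).fit()
print(model.params)   # fitted coefficients
print(model.pvalues)  # which predictors appear significant
```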


Universal Access in the Information Society | 2016

Building a Swiss German Sign Language avatar with JASigning and evaluating it among the Deaf community

Sarah Ebling; John R. W. Glauert

This paper reports on the development of a system that translates German train announcements of the Swiss Federal Railways (Schweizerische Bundesbahnen, SBB) into Swiss German Sign Language (Deutschschweizerische Gebärdensprache, DSGS) in real time and displays the result via an avatar. The system used to animate the avatar is called JASigning. Deliverables of the projects during which JASigning was developed are the main source of documentation for the system, along with notes on the project web site. Not all planned features have been fully implemented: some because they are used very infrequently, others because there is insufficient linguistic research on which to base an implementation. The team of hearing and Deaf researchers identified the avatar functionality needed for the project and built a first version of the avatar. A focus group study with seven Deaf signers was then carried out to obtain feedback on how to further improve the avatar. This paper reports the evaluation results. It also discusses the workarounds introduced for features that were not yet directly available in the JASigning system. These features were not specific to train announcements; hence, knowledge of how to achieve their designated effects in JASigning can be useful to those working with other types of sign language data as well.


Conference of the International Speech Communication Association | 2015

Synthesizing the finger alphabet of Swiss German Sign Language and evaluating the comprehensibility of the resulting animations

Sarah Ebling; Rosalee Wolfe; Jerry Schnepp; Souad Baowidan; John C. McDonald; Robyn Moncrief; Sandra Sidler-Miserez; Katja Tissi

This paper reports on work in synthesizing the finger alphabet of Swiss German Sign Language (Deutschschweizerische Gebärdensprache, DSGS) as a first step towards a fingerspelling learning tool for this language. Sign language synthesis is an instance of automatic sign language processing, which in turn forms part of natural language processing (NLP). The contribution of this paper is twofold: firstly, the process of creating a set of hand postures and transitions for the DSGS finger alphabet is explained, and secondly, the results of a study assessing the comprehensibility of the resulting animations are reported. The comprehension rate of the signing avatar was highly satisfactory at 90.06%.
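
To make the posture-and-transition idea concrete, here is a hedged sketch: each letter maps to a hand posture (represented here by a few joint angles) and the animation interpolates between consecutive postures. All values and the linear-blend transition model are invented for illustration, not the project's actual animation pipeline.

```python
import numpy as np

# Toy postures: per-letter joint-angle vectors (degrees), invented values.
postures = {
    "D": np.array([10.0, 80.0, 80.0]),
    "S": np.array([85.0, 85.0, 85.0]),
    "G": np.array([5.0, 90.0, 10.0]),
}

def fingerspell(word, frames_per_transition=5):
    """Yield interpolated joint-angle frames for a fingerspelled word."""
    letters = [postures[c] for c in word]
    for a, b in zip(letters, letters[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_transition):
            yield (1 - t) * a + t * b  # linear blend between postures

for frame in fingerspell("DSG"):
    print(np.round(frame, 1))
```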


ACM Transactions on Accessible Computing | 2017

Regression Analysis of Demographic and Technology-Experience Factors Influencing Acceptance of Sign Language Animation

Hernisa Kacorri; Matt Huenerfauth; Sarah Ebling; Kasmira Patel; Kellie Menzies; Mackenzie Willard

Software for automating the creation of linguistically accurate and natural-looking animations of American Sign Language (ASL) could increase information accessibility for many people who are deaf. As compared to recording and updating videos of human ASL signers, technology for automatically producing animation from an easy-to-update script would make maintaining ASL content on websites more efficient. Most sign language animation researchers evaluate their systems by collecting subjective judgments and comprehension-question responses from deaf participants. Through a survey (N = 62) and multiple-regression analysis, we identified relationships between (a) demographic and technology-experience characteristics of participants and (b) the subjective and objective scores collected from them during the evaluation of sign language animation systems. These relationships were experimentally verified in a subsequent user study with 57 participants, which demonstrated that specific subpopulations have higher comprehension or subjective scores when viewing sign language animations in an evaluation study. This finding indicates that researchers should collect and report a set of specific characteristics about participants in any publication describing an evaluation study of their technology, a practice that is not yet standard among researchers working in this field. In addition to investigating this relationship between participant characteristics and study results, we have released our survey questions in ASL and English that can be used to measure these participant characteristics, to encourage reporting of such data in future studies. Such reporting would enable researchers in the field to better interpret and compare results between studies with different participant pools.


Conference of the International Speech Communication Association | 2015

Bridging the gap between sign language machine translation and sign language animation using sequence classification

Sarah Ebling; Matt Huenerfauth

To date, the non-manual components of signed utterances have rarely been considered in automatic sign language translation. However, these components are capable of carrying important linguistic information. This paper presents work that bridges the gap between the output of a sign language translation system and the input of a sign language animation system by incorporating non-manual information into the final output of the translation system. More precisely, the generation of non-manual information is scheduled after the machine translation step and treated as a sequence classification task. While sequence classification has been used to solve automatic spoken language processing tasks, we believe this to be the first work to apply it to the generation of non-manual information in sign languages. All of our experimental approaches outperformed lower-baseline approaches consisting of unigram or bigram models of non-manual features.
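
As an illustration of the sequence-classification framing, the sketch below predicts a non-manual label for each sign gloss in the translated output from the gloss and its neighbours. The glosses, labels, and classifier choice are assumptions for illustration, not the paper's implementation.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: gloss sequences with per-gloss non-manual labels.
sentences = [
    (["TRAIN", "ZURICH", "GO"], ["neutral", "raised", "neutral"]),
    (["TRAIN", "LATE"],         ["neutral", "furrowed"]),
]

def features(glosses, i):
    """Context features for position i: current, previous, and next gloss."""
    return {
        "cur": glosses[i],
        "prev": glosses[i - 1] if i > 0 else "<S>",
        "next": glosses[i + 1] if i < len(glosses) - 1 else "</S>",
    }

X = [features(g, i) for g, _ in sentences for i in range(len(g))]
y = [lab for _, labs in sentences for lab in labs]

# One-hot encode the feature dicts and train a per-position classifier.
clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(X, y)

test = ["TRAIN", "ZURICH", "LATE"]
print(clf.predict([features(test, i) for i in range(len(test))]))
```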


Meeting of the Association for Computational Linguistics | 2016

An Open Web Platform for Rule-Based Speech-to-Sign Translation

Manny Rayner; Pierrette Bouillon; Sarah Ebling; Johanna Gerlach; Irene Strasly; Nikos Tsourakis

We present an open web platform for developing, compiling, and running rule-based speech-to-sign translation applications. Speech recognition is performed using the Nuance Recognizer 10.2 toolkit, and signed output, including both manual and non-manual components, is rendered using the JASigning avatar system. The platform is designed to make the component technologies readily accessible to sign language experts who are not necessarily computer scientists. Translation grammars are written in a version of Synchronous Context-Free Grammar adapted to the peculiarities of sign language. All processing is carried out on a remote server, with content uploaded and accessed through a web interface. Initial experiences show that simple translation grammars can be implemented on a time-scale of a few hours to a few days and produce signed output readily comprehensible to Deaf informants. Overall, the platform drastically lowers the barrier to entry for researchers interested in building applications that generate high-quality signed language.
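
The grammar formalism lends itself to a small worked example. In a synchronous context-free grammar, each rule pairs a spoken-language right-hand side with a sign-gloss right-hand side over shared nonterminals, so one derivation yields both outputs. The toy grammar, glosses, and rule notation below are invented for illustration, not the platform's actual grammar format.

```python
# Each rule maps a nonterminal to a (spoken side, signed side) pair.
# Shared nonterminals (here CITY) are expanded on both sides.
rules = {
    "S":    (["the", "train", "to", "CITY", "is", "delayed"],
             ["TRAIN", "CITY", "DELAY"]),
    "CITY": (["Zurich"], ["ZURICH"]),
}

def expand(symbol, side):
    """Recursively expand one side (0 = spoken, 1 = signed) of the grammar."""
    if symbol not in rules:  # terminal symbol
        return [symbol]
    out = []
    for token in rules[symbol][side]:
        out.extend(expand(token, side))
    return out

print(" ".join(expand("S", 0)))  # the train to Zurich is delayed
print(" ".join(expand("S", 1)))  # TRAIN ZURICH DELAY
```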


Language and Technology Conference | 2011

Digging for names in the mountains: Combined person name recognition and reference resolution for German alpine texts

Sarah Ebling; Rico Sennrich; David Klaper

In this paper, we introduce a module that combines person name recognition and reference resolution for German. Our data consists of a corpus of Alpine texts. This text type poses specific challenges because of a multitude of toponyms, some of which interfere with person names. Our reference resolution algorithm outputs person entities identified by their last and first names, along with their associated features (jobs, addresses, academic titles).
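
A much-simplified sketch of the reference-resolution idea follows: cluster recognized name mentions into entities, accumulating first names and features. The mention format and the last-name clustering key are assumptions for illustration; the paper's algorithm also uses first names and other evidence, which a last-name-only key (as here) would need to avoid merging distinct people who share a surname.

```python
from collections import defaultdict

# Hypothetical mentions extracted by a name recognizer from Alpine texts.
mentions = [
    {"first": "Hans", "last": "Meier", "title": None},
    {"first": None,   "last": "Meier", "title": "Dr."},
    {"first": "Anna", "last": "Keller", "title": None},
]

# Group mentions by last name, merging first names and titles per entity.
entities = defaultdict(lambda: {"first_names": set(), "titles": set()})
for m in mentions:
    ent = entities[m["last"]]
    if m["first"]:
        ent["first_names"].add(m["first"])
    if m["title"]:
        ent["titles"].add(m["title"])

for last, ent in entities.items():
    print(last, ent)
```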


International Conference on Computers for Handicapped Persons | 2014

Building an Application for Learning the Finger Alphabet of Swiss German Sign Language through Use of the Kinect

Phuoc Loc Nguyen; Vivienne Falk; Sarah Ebling

We developed an application for learning the finger alphabet of Swiss German Sign Language. It consists of a user interface and a recognition component built around the Kinect sensor. Since the official Kinect Software Development Kit (SDK) does not recognize fingertips, we extended it with an existing fingertip-detection algorithm.
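
The abstract does not name the fingertip-detection algorithm used; one common approach, assumed here purely for illustration, finds convex-hull points of the segmented hand contour that lie far from the palm centre. The sketch runs on a synthetic mask so it is self-contained (OpenCV 4.x API).

```python
import cv2
import numpy as np

# Synthetic binary "hand": a palm blob with two finger-like protrusions.
mask = np.zeros((200, 200), np.uint8)
cv2.circle(mask, (100, 140), 40, 255, -1)            # palm
cv2.rectangle(mask, (80, 40), (90, 140), 255, -1)    # finger 1
cv2.rectangle(mask, (110, 60), (120, 140), 255, -1)  # finger 2

# Extract the hand contour and its convex hull.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)
hull = cv2.convexHull(contour).reshape(-1, 2)

# Fingertip candidates: hull points far from the palm centre.
palm = np.array([100, 140])
tips = [p for p in hull if np.linalg.norm(p - palm) > 70]
print(tips)  # roughly the tops of the two "fingers"
```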


International Conference on Universal Access in Human-Computer Interaction | 2017

Evaluation of Animated Swiss German Sign Language Fingerspelling Sequences and Signs

Sarah Ebling; Sarah Johnson; Rosalee Wolfe; Robyn Moncrief; John C. McDonald; Souad Baowidan; Tobias Haug; Sandra Sidler-Miserez; Katja Tissi

This paper reports on work in animating Swiss German Sign Language (DSGS) fingerspelling sequences and signs as well as on the results of a study evaluating the acceptance of the animations. The animated fingerspelling sequences form part of a fingerspelling learning tool for DSGS, while the animated signs are to be used in a study exploring the potential of sign language avatars in sign language assessment. To evaluate the DSGS fingerspelling sequences and signs, we conducted a focus group study with seven early learners of DSGS. We identified the following aspects of the animations as requiring improvement: non-manual features (in particular, facial expressions and head and shoulder movements), (fluidity of) manual movements, and hand positions of fingerspelling signs.


International Conference on Computers Helping People with Special Needs | 2016

A Web Application for Geolocalized Signs in Synthesized Swiss German Sign Language

Anna Jancso; Xi Rao; Johannes Graën; Sarah Ebling

In this paper, we report on the development of a web application that displays Swiss German Sign Language (DSGS) signs for places with train stations in Switzerland in synthesized form, i.e., by means of a signing avatar. Ours is the first platform to make DSGS place name signs accessible in geolocalized form, i.e., by linking them to a map, and to use synthesized signing. The latter mode of display is advantageous over videos of human signers, since place name signs for any sign language are subject to language change. Our web application targets both deaf and hearing DSGS users. The underlying programming code is freely available. The application can be extended to display any kind of geolocalized data in any sign language.
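
A minimal sketch of what geolocalized sign data of this kind might look like: each place name sign carries coordinates for the map and an identifier for its synthesized animation. The schema, coordinates, and identifiers below are invented; the project's actual data format may differ.

```python
import json

# Hypothetical entries: one per place with a train station.
place_signs = [
    {"place": "Zürich HB", "lat": 47.3779, "lon": 8.5403,
     "sign_id": "dsgs-zuerich"},
    {"place": "Bern", "lat": 46.9490, "lon": 7.4389,
     "sign_id": "dsgs-bern"},
]

# Served as JSON; a map front end places one marker per entry and, on
# click, plays the avatar animation referenced by sign_id.
print(json.dumps(place_signs, ensure_ascii=False, indent=2))
```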

Collaboration


An overview of Sarah Ebling's top co-authors.

Top Co-Authors:

Matt Huenerfauth, Rochester Institute of Technology
Hernisa Kacorri, City University of New York
Kasmira Patel, Rochester Institute of Technology
Mackenzie Willard, Rochester Institute of Technology