Publications


Featured research published by Vikas Ashok.


annual meeting of the special interest group on discourse and dialogue | 2014

Dialogue Act Modeling for Non-Visual Web Access

Vikas Ashok; Yevgen Borodin; Svetlana Stoyanchev; I. V. Ramakrishnan

Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers, the currently available assistive technology that narrates the textual content on the screen and provides shortcuts to navigate it. In this paper, we present a dialogue act model toward developing a speech-enabled browsing system. The model is based on corpus data collected in a Wizard-of-Oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and an analysis of the results are presented.
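
As a rough illustration of the modeling step, the sketch below trains a toy dialogue act classifier over spoken browsing utterances. The label set, example utterances, and bag-of-words features are hypothetical stand-ins; the paper's actual feature sets and classifiers are not reproduced here.

```python
# A minimal sketch of dialogue act classification for spoken browsing
# commands. Labels and features are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances paired with dialogue act labels.
utterances = [
    "click the search button",      # command
    "what is on this page",         # question
    "read the next paragraph",      # command
    "is there a login link here",   # question
]
acts = ["command", "question", "command", "question"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(utterances, acts)

print(model.predict(["click the login link"]))  # e.g. ['command']
```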


conference on computers and accessibility | 2016

Tactile Accessibility: Does Anyone Need a Haptic Glove?

Andrii Soviak; Anatoliy Borodin; Vikas Ashok; Yevgen Borodin; Yury Puzis; I. V. Ramakrishnan

Graphical user interfaces (GUIs) are widely used on smartphones, tablets, and laptops. While GUIs are convenient for sighted users, their accessibility for blind people, who use screen readers to interact with them, remains problematic. Even the most screen-reader-accessible GUIs are far less usable for blind people than for sighted people, because the former cannot benefit from the geometric layout of GUIs. As a result, blind people often have to listen through a lot of irrelevant content before they find what they are looking for. Haptic interfaces (those providing tactile feedback) have the potential to make GUIs more accessible and usable for blind people. Alas, mainstream computing devices do not have haptic screens that would enable high-resolution tactile feedback, and specialized haptic devices are very limited and/or exorbitantly expensive and bulky. In this paper, we describe a low-cost haptic-glove system, FeelX, which can potentially enable usable tactile interaction with GUIs. The vision of FeelX is to let blind users connect it to any computer or smartphone and then interact by moving their hands on any flat surface, such as a desk or table. To establish the practicality and desirability of haptic gloves, we evaluated the initial prototype of the glove in a user study with 20 blind participants. Throughout the study, we performed a comparative evaluation of several design options for the tactile interface. The participants were asked to identify simple geometric figures such as lines, rectangles, circles, and triangles, which are the basic building blocks of any GUI. Although the FeelX prototype is far from being a usable product, the results of the study indicate that blind users want to use haptic gloves.
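
To make the tactile-interaction idea concrete, here is a minimal sketch of how a glove-based system might render shape outlines: vibrate whenever the tracked fingertip crosses the border of a GUI shape. The coordinate and actuation model is an assumption; FeelX's actual hardware and the design options compared in the study are not modeled here.

```python
# A minimal sketch of tactile rendering under assumed tracking: as the
# gloved hand moves over a flat surface, vibration fires when the
# fingertip is near the outline of a rectangle (a stand-in GUI shape).
def on_shape_outline(x, y, shape, tol=2.0):
    """True if point (x, y) lies within `tol` of a rectangle's border."""
    left, top, right, bottom = shape
    near_v = (abs(x - left) <= tol or abs(x - right) <= tol) \
        and top - tol <= y <= bottom + tol
    near_h = (abs(y - top) <= tol or abs(y - bottom) <= tol) \
        and left - tol <= x <= right + tol
    return near_v or near_h

def tactile_feedback(finger_xy, shapes):
    """Return vibration intensity for the current fingertip position."""
    x, y = finger_xy
    return 1.0 if any(on_shape_outline(x, y, s) for s in shapes) else 0.0

# A rectangle from (10, 10) to (60, 40); fingertip on its left edge.
print(tactile_feedback((10, 25), [(10, 10, 60, 40)]))  # -> 1.0
```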


international conference on web engineering | 2014

Widget Classification with Applications to Web Accessibility

Valentyn Melnyk; Vikas Ashok; Yury Puzis; Andrii Soviak; Yevgen Borodin; I. V. Ramakrishnan

Once simple and static, many web pages have now evolved into complex web applications. Hundreds of web development libraries provide ready-to-use dynamic widgets, which can be further customized to fit the needs of individual web applications. With such a wide selection of widgets and a lack of standardization, dynamic widgets have proven to be an insurmountable problem for blind users who rely on screen readers to make web pages accessible. Screen readers generally do not recognize widgets that dynamically appear on the screen; as a result, blind users either cannot benefit from the convenience of widgets (e.g., a date picker) or get stuck on inaccessible content (e.g., alert windows). In this paper, we propose a general approach to identifying, or classifying, dynamic widgets with the purpose of "reverse engineering" web applications and improving their accessibility. To demonstrate the feasibility of the approach, we report on experiments showing that very popular dynamic widgets such as date pickers, popup menus, suggestion lists, and alert windows can be effectively and accurately recognized in live web applications.
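
As an illustration of what widget classification might look like, the sketch below applies hand-written rules to a feature summary of a dynamically inserted DOM subtree. The feature names are hypothetical, and the paper's classifiers were learned from data rather than hand-coded; this only gestures at the input/output shape of the task.

```python
# A minimal sketch of widget classification from DOM features, under
# assumed feature names; not the paper's learned classifier.
def classify_widget(features):
    """Guess the widget type of a dynamically inserted DOM subtree.

    `features` is a hypothetical summary of the subtree: class-name
    tokens, list-item counts, and layout flags captured on insertion.
    """
    tokens = set(features.get("class_tokens", []))
    if "datepicker" in tokens or features.get("has_day_grid"):
        return "date picker"
    if features.get("floats_over_page") and features.get("has_ok_button"):
        return "alert window"
    if features.get("anchored_to_textbox") and features.get("list_items", 0) > 2:
        return "suggestion list"
    if features.get("appears_on_hover") and features.get("list_items", 0) > 2:
        return "popup menu"
    return "unknown"

print(classify_widget({"class_tokens": ["ui", "datepicker"]}))  # date picker
```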


Proceedings of the 11th Web for All Conference | 2014

Listen to everything you want to read with Capti narrator

Yevgen Borodin; Yuri Puzis; Andrii Soviak; James Bouker; Bo Feng; Richard Sicoli; Andrii Melnyk; Valentyn Melnyk; Vikas Ashok; Glenn Dausch; I. V. Ramakrishnan

Capti Narrator is a new cross-platform application for convenient, hands-free consumption of digital content, enabling users to listen to news, blogs, documents, unprotected e-books, and more while commuting, cooking, or working out, anywhere, anytime. Capti will improve the productivity of students, busy professionals, language learners, people with print disabilities, and anyone else who wants to listen to content instead of reading it from the screen.


Proceedings of the 12th Web for All Conference | 2015

Capti-speak: a speech-enabled web screen reader

Vikas Ashok; Yevgen Borodin; Yury Puzis; I. V. Ramakrishnan

People with vision impairments interact with web pages via screen readers that provide keyboard shortcuts for navigating through the content. However, web browsing with screen readers can be a frustrating experience, mainly due to the time and effort spent locating desired content through extensive use of keyboard shortcuts. This gets even worse if users have a limited shortcut vocabulary or are unfamiliar with the structure of a particular webpage. Augmenting screen readers with a speech input interface has the potential to alleviate these limitations. This paper describes the design, implementation, and evaluation of Capti-Speak, a speech-enabled screen reader for web browsing, capable of translating speech utterances into browsing actions, executing the actions, and providing audio feedback. The novelty of Capti-Speak is that it leverages a custom dialog model, designed exclusively for non-visual web access, for interpreting speech utterances. A user study with 20 blind subjects showed that Capti-Speak was significantly more usable and efficient than a regular screen reader, especially for ad-hoc browsing, searching, and navigating to content of interest.
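
The dispatch step, translating an interpreted utterance into a browsing action, might look like the following sketch. The act labels, slot names, and `Page` model are hypothetical; Capti-Speak's custom dialog model is far richer than this table.

```python
# A minimal sketch of utterance-to-action dispatch, under assumed act
# labels and a stand-in page model; not Capti-Speak's actual design.
class Page:
    """Hypothetical stand-in for the screen reader's page model."""
    def focus(self, target):
        return f"cursor moved to {target}"
    def narrate_from_cursor(self):
        return "narrating from the current position"
    def activate(self, target):
        return f"activated {target}"

def execute(act, slots, page):
    """Dispatch a classified utterance to a non-visual browsing action."""
    if act == "navigate":          # e.g. "go to the search box"
        return page.focus(slots["target"])
    if act == "read":              # e.g. "read this section"
        return page.narrate_from_cursor()
    if act == "click":             # e.g. "press the submit button"
        return page.activate(slots["target"])
    return "Sorry, I did not understand that."

print(execute("navigate", {"target": "search box"}, Page()))
```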


international conference on information technology: new generations | 2015

Dataless Data Mining: Association Rules-Based Distributed Privacy-Preserving Data Mining

Vikas Ashok; K. Navuluri; A. Alhafdhi; Ravi Mukkamala

Today, the desire to mine data from varied sources to discover the behaviors and patterns of entities such as customers, diseases, and environmental conditions is on the rise. At the same time, resistance to sharing data is also on the rise, due to increasing governmental regulation and individuals' desire to preserve privacy. In this paper, we employ association rule mining to preserve individual data privacy without overly compromising the accuracy of the global data mining task. We describe the proposed methodology and show that the proposed scheme is privacy-preserving. The methodology is tested on three commonly available data sets. The results validate our claims regarding the accuracy of synthetic data in its ability to represent the original data without compromising privacy.
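
The core "share statistics, not data" idea can be sketched as follows: each site mines itemset supports locally and publishes only those statistics, from which synthetic transactions are sampled for the global mining task. This is an illustration of the general approach under simplifying assumptions (only single-item frequencies are matched), not the paper's exact scheme.

```python
# A minimal sketch of privacy-preserving mining via shared supports.
import random
from itertools import combinations

def itemset_supports(transactions, max_size=2):
    """Relative frequency of every itemset up to `max_size` items."""
    counts = {}
    for t in transactions:
        for k in range(1, max_size + 1):
            for items in combinations(sorted(t), k):
                counts[items] = counts.get(items, 0) + 1
    n = len(transactions)
    return {items: c / n for items, c in counts.items()}

def synthesize(supports, n_rows, seed=0):
    """Sample synthetic transactions whose single-item frequencies match
    the published supports (pairwise structure is ignored for brevity)."""
    rng = random.Random(seed)
    singles = {items[0]: s for items, s in supports.items() if len(items) == 1}
    return [{item for item, s in singles.items() if rng.random() < s}
            for _ in range(n_rows)]

private = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"}]
supports = itemset_supports(private)           # only these leave the site
synthetic = synthesize(supports, n_rows=100)   # mined by the aggregator
```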


Proceedings of the 12th Web for All Conference | 2015

Look Ma, no ARIA: generic accessible interfaces for web widgets

Valentyn Melnyk; Vikas Ashok; Yury Puzis; Yevgen Borodin; Andrii Soviak; I. V. Ramakrishnan

Once simple and static, many web pages have now evolved into complex web applications. Hundreds of web development libraries provide ready-to-use custom widgets, which can be further customized to fit the needs of individual web applications. Web developers are supposed to use ARIA specifications to make widgets accessible to screen readers; however, ARIA markup is often used incorrectly and inconsistently, and is sometimes missing from webpages altogether. Given the wide selection of widgets and the lack of proper ARIA support, accessing the content of custom widgets with screen readers has been a challenge for blind users. As a result, blind users cannot benefit from the convenience of these widgets or, even worse, get stuck on inaccessible content. In our previous work, we showed that custom dynamic widgets can be automatically detected and classified as soon as they appear in web pages. In this paper, we propose to make such widgets accessible by providing generic interfaces for widgets of a particular class. We show how this can be accomplished using the example of a web chat widget. To demonstrate the usability of the resulting chat interface, we report the results of a user study with 18 blind screen-reader users.
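
A generic, class-specific interface might look like the sketch below: once a subtree is classified as a chat widget, the screen reader exposes the same uniform commands regardless of the site's markup. The `widget` accessor methods and command names are hypothetical and only gesture at the idea.

```python
# A minimal sketch of a uniform chat interface over a classified widget,
# under assumed accessors; not the interface evaluated in the paper.
class GenericChatInterface:
    """Uniform non-visual commands for any subtree classified as a chat."""
    def __init__(self, widget):
        self.widget = widget   # the detected chat widget's DOM wrapper
        self.read_upto = 0     # index of the last announced message

    def announce_new_messages(self):
        """Speak only the messages that arrived since the last check."""
        msgs = self.widget.messages()[self.read_upto:]
        self.read_upto += len(msgs)
        return [f"{m['sender']} says: {m['text']}" for m in msgs]

    def send(self, text):
        """Type into the chat's input box and submit, wherever it is."""
        self.widget.input_box().set_value(text)
        self.widget.send_button().click()
```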


2011 Fifth IEEE International Conference on Advanced Telecommunication Systems and Networks (ANTS) | 2011

A novel two-stage self correcting GPS-free localization algorithm for GSM mobiles

Thrivikrama; Vikas Ashok; A. Srinivas

Extending emergency services like Enhanced 911 to a constantly growing mobile population is becoming increasingly important for wireless providers. Localization, or position estimation, with GPS in GSM-based mobile phones is expensive in terms of both cost and energy consumption. Experiments have shown that GPS does not provide reliable location estimates of mobile terminals (MTs) in indoor and dense urban environments. The principal contribution of this paper is a novel algorithm that employs existing GSM signal strength measurements, rather than GPS measurements, as the basis for position estimation in GSM-based networks. The proposed algorithm is two-staged, with a signal weight based positioning technique (SWBP) as the initial stage, followed by a self-correction feedback mechanism (SCFM). In SWBP, an initial position estimate is computed from the signal strengths received from the surrounding base stations. This initial estimate is then fed into the SCFM to generate an accurate final location estimate based on the history of previous location estimates. The second stage strives to negate the effects of erroneous signal strength measurements and abrupt changes in user direction. Simulation results demonstrate that the proposed algorithm yields better MT position estimates than conventional triangulation techniques.
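
A minimal sketch of the first stage (SWBP) is a signal-strength-weighted centroid over nearby base stations, shown below with a simple smoothing step standing in for the self-correction feedback mechanism (SCFM). The weighting and smoothing details here are assumptions, not the published algorithm.

```python
# A minimal sketch of signal-weighted positioning plus history-based
# smoothing; the published SWBP/SCFM details are not reproduced.
def swbp_estimate(base_stations):
    """base_stations: list of (x, y, rss), with rss converted to a
    positive weight (stronger signal -> larger weight)."""
    total = sum(rss for _, _, rss in base_stations)
    x = sum(bx * rss for bx, _, rss in base_stations) / total
    y = sum(by * rss for _, by, rss in base_stations) / total
    return x, y

def smooth(history, estimate, alpha=0.5):
    """Blend the new estimate with the previous one to damp RSS noise
    (a stand-in for the SCFM's use of past location estimates)."""
    if not history:
        return estimate
    px, py = history[-1]
    ex, ey = estimate
    return (alpha * ex + (1 - alpha) * px, alpha * ey + (1 - alpha) * py)

history = []
raw = swbp_estimate([(0, 0, 3.0), (100, 0, 1.0), (0, 100, 1.0)])
history.append(smooth(history, raw))  # pulled toward the strongest cell
```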


international conference on universal access in human-computer interaction | 2017

Non-visual Web Browsing: Beyond Web Accessibility

I. V. Ramakrishnan; Vikas Ashok; Syed Masum Billah

People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all visible elements of the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use or easy to navigate. For example, they may not be able to locate desired information without listening to a lot of irrelevant content. These issues go beyond web accessibility and directly impact web usability. Several techniques for making the Web usable for screen reading have been reported in the accessibility literature. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of web content is the overarching theme that drives these techniques for improving web usability.


human factors in computing systems | 2017

Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

Syed Masum Billah; Vikas Ashok; Donald E. Porter; I. V. Ramakrishnan

Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing, as this is still the primary way computers are used in education and employment. Toward that end, the paper presents a user study with 21 visually impaired participants, specifically involving switching screen readers within and across different computing platforms, and using screen readers in remote-access scenarios. Among the findings, the study shows that, even for remote desktop access, an early forerunner of true ubiquitous access, screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments.

Collaboration


Dive into Vikas Ashok's collaborations.

Top Co-Authors

Yury Puzis (also listed as Yuri Puzis)
Stony Brook University

Donald E. Porter
University of North Carolina at Chapel Hill

A. Alhafdhi
Old Dominion University