
Publication


Featured research published by Khairi Reda.


IEEE Computer Graphics and Applications | 2013

Visualizing Large, Heterogeneous Data in Hybrid-Reality Environments

Khairi Reda; Alessandro Febretti; Aaron Knoll; Jillian Aurisano; Jason Leigh; Andrew E. Johnson; Michael E. Papka; Mark Hereld

Constructing integrative visualizations that simultaneously cater to a variety of data types is challenging. Hybrid-reality environments blur the line between virtual environments and tiled display walls. They incorporate high-resolution, stereoscopic displays, which can be used to juxtapose large, heterogeneous datasets while providing a range of naturalistic interaction schemes. They thus empower designers to construct integrative visualizations that more effectively mash up 2D, 3D, temporal, and multivariate datasets.


IEEE VGTC Conference on Visualization | 2011

Visualizing the evolution of community structures in dynamic social networks

Khairi Reda; Chayant Tantipathananandh; Andrew E. Johnson; Jason Leigh; Tanya Y. Berger-Wolf

Social network analysis is the study of patterns of interaction between social entities. The field is attracting increasing attention from diverse disciplines including sociology, epidemiology, and behavioral ecology. An important sociological phenomenon that draws the attention of analysts is the emergence of communities, which tend to form, evolve, and dissolve gradually over a period of time. Understanding this evolution is crucial to sociologists and domain scientists, and often leads to a better appreciation of the social system under study. Therefore, it is imperative that social network visualization tools support this task. While graph‐based representations are well suited for investigating structural properties of networks at a single point in time, they appear to be significantly less useful when used to analyze gradual structural changes over a period of time. In this paper, we present an interactive visualization methodology for dynamic social networks. Our technique focuses on revealing the community structure implied by the evolving interaction patterns between individuals. We apply our visualization to analyze the community structure in the US House of Representatives. We also report on a user study conducted with the participation of behavioral ecologists working with social network datasets that depict interactions between wild animals. Findings from the user study confirm that the visualization was helpful in providing answers to sociological questions as well as eliciting new observations on the social organization of the population under study.


2013 IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV) | 2013

Visualizing large-scale atomistic simulations in ultra-resolution immersive environments

Khairi Reda; Aaron Knoll; Ken Ichi Nomura; Michael E. Papka; Andrew E. Johnson; Jason Leigh

Molecular Dynamics is becoming a principal methodology in the study of nanoscale systems, paving the way for innovations in battery design and alternative fuel applications. With the increasing availability of computational power and advances in modeling, atomistic simulations are rapidly growing in scale and complexity. Despite the plethora of molecular visualization techniques, visualizing and exploring large-scale atomistic simulations remain difficult. Existing molecular representations are not perceptually scalable and often adopt a rigid definition of surfaces, making them inappropriate for nanostructured materials where boundaries are inherently ill-defined. In this paper, we present an application for the interactive visualization and exploration of large-scale atomistic simulations in ultra-resolution immersive environments. We employ a hybrid representation which combines solid ball-and-stick glyphs with volumetric surfaces to visually convey the uncertainty in molecular boundaries at the nanoscale. We also describe a scalable, distributed GPU ray-casting implementation capable of rendering complex atomistic simulations with millions of atoms in real-time.


Workshop on Beyond Time and Errors | 2014

Evaluating user behavior and strategy during visual exploration

Khairi Reda; Andrew E. Johnson; Jason Leigh; Michael E. Papka

Visualization practitioners have traditionally focused on evaluating the outcome of the visual analytic process, as opposed to studying how that process unfolds. Since user strategy would likely influence the outcome of visual analysis and the nature of insights acquired, it is important to understand how the analytic behavior of users is shaped by variations in the design of the visualization interface. This paper presents a technique for evaluating user behavior in exploratory visual analysis scenarios. We characterize visual exploration as a fluid activity involving transitions between mental and interaction states. We show how micro-patterns in these transitions can be captured and analyzed quantitatively to reveal differences in the exploratory behavior of users, given variations in the visualization interface.
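The micro-pattern analysis this abstract describes can be sketched as counting short subsequences (n-grams) in a coded sequence of analyst states. The state labels below are hypothetical placeholders, not the coding scheme used in the paper:

```python
from collections import Counter

def micro_patterns(states, n=2):
    """Count length-n transition patterns in a coded sequence of
    mental/interaction states observed during visual exploration."""
    return Counter(tuple(states[i:i + n]) for i in range(len(states) - n + 1))

# Hypothetical coded session: the analyst alternates between forming
# hypotheses (H), interacting with the view (I), and noting insights (N).
session = ["H", "I", "I", "N", "H", "I", "N"]
print(micro_patterns(session))  # ('H', 'I') occurs twice, ('I', 'N') twice
```

Comparing such pattern counts across interface conditions is one way to quantify differences in exploratory behavior.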


Eurographics | 2014

RBF Volume Ray Casting on Multicore and Manycore CPUs

Aaron Knoll; Ingo Wald; Paul A. Navrátil; Anne Bowen; Khairi Reda; Michael E. Papka; Kelly P. Gaither

Modern supercomputers enable increasingly large N‐body simulations using unstructured point data. The structures implied by these points can be reconstructed implicitly. Direct volume rendering of radial basis function (RBF) kernels in domain‐space offers flexible classification and robust feature reconstruction, but achieving performant RBF volume rendering remains a challenge for existing methods on both CPUs and accelerators. In this paper, we present a fast CPU method for direct volume rendering of particle data with RBF kernels. We propose a novel two‐pass algorithm: first sampling the RBF field using coherent bounding hierarchy traversal, then subsequently integrating samples along ray segments. Our approach performs interactively for a range of data sets from molecular dynamics and astrophysics up to 82 million particles. It does not rely on level of detail or subsampling, and offers better reconstruction quality than structured volume rendering of the same data, exhibiting comparable performance and requiring no additional preprocessing or memory footprint other than the BVH. Lastly, our technique enables multi‐field, multi‐material classification of particle data, providing better insight and analysis.
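As a rough illustration of the field being sampled in the first pass, the density implied by unstructured particles can be evaluated as a sum of Gaussian RBF kernels. This is a naive sketch only: the paper's contribution is making this fast via coherent bounding-hierarchy traversal, which is omitted here, and the kernel and radius are assumptions:

```python
import math

def rbf_field(sample, particles, radius=1.0):
    """Evaluate a Gaussian RBF density field at a 3D sample point.
    Each particle contributes exp(-r^2 / radius^2); a real renderer
    would use a BVH to skip particles outside the kernel support."""
    total = 0.0
    for px, py, pz in particles:
        r2 = (sample[0] - px) ** 2 + (sample[1] - py) ** 2 + (sample[2] - pz) ** 2
        total += math.exp(-r2 / radius ** 2)
    return total

# One particle at the origin: the density at the particle center is 1.0.
print(rbf_field((0.0, 0.0, 0.0), [(0.0, 0.0, 0.0)]))  # 1.0
```

In the full method, these samples are then integrated along ray segments in a second pass to produce the rendered volume.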


BMC Bioinformatics | 2015

BactoGeNIE: a large-scale comparative genome visualization for big displays

Jillian Aurisano; Khairi Reda; Andrew E. Johnson; Elisabeta G. Marai; Jason Leigh

Background: The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics, which aim to enable analysis tasks across collections of genomes, suffer from visual scalability issues. While large, multi-tiled, high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments in order to enable the effective visual analysis of large genomics datasets.

Results: In this paper, we present the Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process.

Conclusions: BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

Scalable Visual Queries for Data Exploration on Large, High-Resolution 3D Displays

Khairi Reda; Andrew E. Johnson; Victor A. Mateevitsi; Catherine Offord; Jason Leigh

As the scale and complexity of data continue to grow at unprecedented rates, scientists are increasingly relying on Large, High-Resolution Displays to visualize and analyze scientific datasets. Recent studies have demonstrated the effectiveness of these displays in supporting cognitively demanding data analysis and sensemaking tasks. While there has been an abundance of research on rendering algorithms for large, high-resolution displays, far less effort has gone into designing interactive visual analytic interfaces to effectively leverage these displays in visual exploration and sensemaking scenarios involving large collections of data. In this paper, we present an interactive visual analytics application for the exploration of large trajectory datasets. Our application utilizes large, high-resolution 3D display environments to simultaneously visualize and juxtapose a large number of trajectories. It also integrates a scalable visual query technique, which can be used to quickly formulate and verify hypotheses, encouraging scientists to contemplate multiple competing theories before drawing conclusions. We evaluate our design within the context of a behavioral ecology case study. We also share our observations from a pilot user study to provide insights on how scientists might utilize large display environments in visual exploration and sensemaking scenarios.


Future Generation Computer Systems | 2015

Multiuser-centered resource scheduling for collaborative display wall environments

Sungwon Nam; Khairi Reda; Luc Renambot; Andrew E. Johnson; Jason Leigh

The popularity of large-scale, high-resolution display walls as visualization endpoints in eScience infrastructure is rapidly growing. These displays can be connected to distributed computing resources over high-speed networks, providing effective means for researchers to visualize, interact with, and understand large volumes of data. Large display walls are typically built by tiling multiple physical displays together, and running one has traditionally required a cluster of computers. With the advent of advanced graphics hardware, a single computer can now drive over a dozen displays, greatly reducing the cost of ownership and maintenance of a tiled display wall system and enabling a broader user base to take advantage of such technologies. Since tiled display walls are also well suited to collaborative work, users tend to launch and operate multiple applications simultaneously. To keep applications responsive even under heavy load, the display wall must prioritize its limited system resources to maximize interactivity, rather than thread-level fair sharing or overall job-completion throughput. In this paper, we present a new resource scheduling scheme specifically designed to prioritize responsiveness in collaborative large display wall environments where multiple users interact with multiple applications simultaneously. We evaluate our scheduling scheme with a user study involving groups of users interacting simultaneously on a tiled display wall with multiple applications. Results show that our scheduling framework provided a higher frame rate for applications, which led to significantly higher user performance (approx. 25%) in a target acquisition test when compared against a traditional operating system scheduling scheme.
Highlights: We present a model that prioritizes applications based on how they are presented. We propose a resource scheduling scheme that achieves presentation fairness. A user study evaluates the proposed scheduler in a multiuser collaborative session.
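The idea of prioritizing applications "based on how they are presented" can be sketched as dividing a resource budget in proportion to each application's on-screen prominence. This toy uses window area as the prominence measure, which is an assumption for illustration, not the paper's actual policy:

```python
def presentation_shares(apps):
    """Divide a unit resource budget among display-wall applications in
    proportion to how prominently each is presented (window area here;
    the actual scheduler's policy is more involved)."""
    total_area = sum(w * h for _, w, h in apps)
    return {name: (w * h) / total_area for name, w, h in apps}

# Hypothetical wall layout: a large visualization next to a small control panel.
layout = [("volume-view", 4000, 2000), ("controls", 1000, 500)]
print(presentation_shares(layout))  # the large view receives most of the budget
```

A scheduler driven by such shares favors the applications users are actually looking at, rather than sharing CPU time equally among threads.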


Information Visualization | 2016

Modeling and evaluating user behavior in exploratory visual analysis

Khairi Reda; Andrew E. Johnson; Michael E. Papka; Jason Leigh

Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This article presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
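The Markov chain model described above can be sketched as estimating first-order transition probabilities from a logged sequence of coded user states. The state names below are hypothetical stand-ins for codes an evaluator might derive from transcripts or log files:

```python
from collections import defaultdict

def transition_probabilities(states):
    """Estimate first-order Markov transition probabilities from a
    logged sequence of user states (mental, interaction, computational)."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Hypothetical coded session reconstructed from a transcript and log file.
log = ["interact", "observe", "hypothesize", "interact", "observe", "insight"]
probs = transition_probabilities(log)
print(probs["interact"]["observe"])  # 1.0: every 'interact' was followed by 'observe'
```

Inspecting such a transition matrix lets the evaluator characterize how users move between states, and compare those dynamics across visualization designs.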


EURASIP Journal on Image and Video Processing | 2013

A human-computer collaborative workflow for the acquisition and analysis of terrestrial insect movement in behavioral field studies

Khairi Reda; Victor A. Mateevitsi; Catherine Offord

The study of insect behavior from video sequences poses many challenges. Despite the advances in image processing techniques, the current generation of insect tracking tools is only effective in controlled lab environments and under ideal lighting conditions. Very few tools are capable of tracking insects in outdoor environments where the insects normally operate. Furthermore, the majority of tools focus on the first stage of the analysis workflow, namely the acquisition of movement trajectories from video sequences. Far less effort has gone into developing specialized techniques to characterize insect movement patterns once acquired from videos. In this paper, we present a human-computer collaborative workflow for the acquisition and analysis of insect behavior from field-recorded videos. We employ a human-guided video processing method to identify and track insects from noisy videos with dynamic lighting conditions and unpredictable visual scenes, improving tracking precision by 20% to 44% compared to traditional automated methods. The workflow also incorporates a novel visualization tool for the large-scale exploratory analysis of insect trajectories. We also provide a number of quantitative methods for statistical hypothesis testing. Together, the various components of the workflow provide end-to-end quantitative and qualitative methods for the study of insect behavior from field-recorded videos. We demonstrate the effectiveness of the proposed workflow with a field study on the navigational strategies of Kenyan seed harvester ants.

Collaboration


Khairi Reda's top co-authors:

Jason Leigh (University of Hawaii at Manoa)
Andrew E. Johnson (University of Illinois at Chicago)
Michael E. Papka (Northern Illinois University)
Victor A. Mateevitsi (University of Illinois at Chicago)
Jillian Aurisano (University of Illinois at Chicago)
Alberto Gonzalez (University of Hawaii at Manoa)
Alessandro Febretti (University of Illinois at Chicago)
Anne Bowen (University of Texas at Austin)