Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ifeoma Nwogu is active.

Publication


Featured research published by Ifeoma Nwogu.


Journal of Machine Learning Research | 2013

Language-motivated approaches to action recognition

Manavender R. Malgireddy; Ifeoma Nwogu; Venu Govindaraju

We present language-motivated approaches to detecting, localizing and classifying activities and gestures in videos. In order to obtain statistical insight into the underlying patterns of motions in activities, we develop a dynamic, hierarchical Bayesian model which connects low-level visual features in videos with poses, motion patterns and classes of activities. This process is somewhat analogous to the method of detecting topics or categories from documents based on the word content of the documents, except that our documents are dynamic. The proposed generative model harnesses both the temporal ordering power of dynamic Bayesian networks such as hidden Markov models (HMMs) and the automatic clustering power of hierarchical Bayesian models such as the latent Dirichlet allocation (LDA) model. We also introduce a probabilistic framework for detecting and localizing pre-specified activities (or gestures) in a video sequence, analogous to the use of filler models for keyword detection in speech processing. We demonstrate the robustness of our classification model and our spotting framework by recognizing activities in unconstrained real-life video sequences and by spotting gestures via a one-shot-learning approach.
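The temporal side of such a model can be illustrated with a minimal sketch of the forward algorithm for HMM likelihood scoring (this is a generic illustration, not the authors' hierarchical model; the 2-state/3-symbol parameters are invented for the example):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the forward algorithm, with per-step scaling to avoid underflow."""
    alpha = pi * B[:, obs[0]]          # initial forward probabilities
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate through transitions, weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Toy 2-state HMM over 3 discrete "motion pattern" symbols
pi = np.array([0.6, 0.4])              # initial state distribution
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])            # state transition matrix
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])       # emission probabilities

seq = [0, 1, 2, 2]
ll = forward_log_likelihood(seq, pi, A, B)
```

In a classification setting, a sequence would be scored under one trained model per activity class and assigned to the class with the highest likelihood.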


Face and Gesture | 2011

Lie to Me: Deceit detection via online behavioral learning

Nisha Bhaskaran; Ifeoma Nwogu; Mark G. Frank; Venu Govindaraju

Inspired by the behavioral scientific discoveries of Dr. Paul Ekman in relation to deceit detection, along with the television drama series Lie to Me, also based on Dr. Ekman's work, we use machine learning techniques to study the underlying phenomena expressed when a person tells a lie. We build an automated framework which detects deceit by measuring the deviation from normal behavior at a critical point in the course of an investigative interrogation. Behavioral psychologists have shown that the eyes (via either gaze aversion or gaze extension) can be good “reflectors” of the inner emotions when a person tells a high-stakes lie. Hence we develop our deceit detection framework around eye movement changes. A dynamic Bayesian model of eye movements is trained during a normal course of conversation for each subject, to represent normal behavior. The remaining conversation is broken into sequences and each sequence is tested against the parameters of the model of normal behavior. At the critical points in the interrogations, the deviations from normalcy are observed and used to deduce veracity or deceit. An analysis of 40 subjects gave an accuracy of 82.5%, which strongly suggests that the latent parameters of eye movements successfully capture behavioral changes and could be viable for use in automated deceit detection.
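The core idea of testing later conversation against a model of normal behavior can be sketched as follows (this stand-in uses a simple z-score test on a scalar behavioral rate rather than the paper's dynamic Bayesian model; all numbers are synthetic):

```python
import numpy as np

def deviation_flags(baseline, test_windows, z_thresh=2.0):
    """Flag test windows whose mean deviates from baseline behavior.

    baseline: 1-D array of a behavioral rate (e.g. eye movements/sec)
    measured during normal conversation; test_windows: list of 1-D
    arrays from later conversation segments. A window is flagged when
    its mean lies more than z_thresh baseline standard deviations from
    the baseline mean.
    """
    mu, sigma = baseline.mean(), baseline.std()
    return [abs(w.mean() - mu) / sigma > z_thresh for w in test_windows]

rng = np.random.default_rng(0)
normal  = rng.normal(3.0, 0.5, size=200)       # calm-phase rates
windows = [rng.normal(3.0, 0.5, size=30),      # consistent with baseline
           rng.normal(6.0, 0.5, size=30)]      # sharp deviation
flags = deviation_flags(normal, windows)
```

Windows flagged at the critical interrogation points would then be interpreted as candidate deceptive responses.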


computer vision and pattern recognition | 2008

(BP)²: Beyond pairwise Belief Propagation labeling by approximating Kikuchi free energies

Ifeoma Nwogu; Jason J. Corso

Belief propagation (BP) can be very useful and efficient for performing approximate inference on graphs. But when the graph is very highly connected with strong conflicting interactions, BP tends to fail to converge. Generalized Belief Propagation (GBP) provides more accurate solutions on such graphs, by approximating Kikuchi free energies, but the clusters required for the Kikuchi approximations are hard to generate. We propose a new algorithmic way of generating such clusters from a graph without exponentially increasing the size of the graph during triangulation. In order to perform the statistical region labeling, we introduce the use of superpixels for the nodes of the graph, as it is a more natural representation of an image than the pixel grid. This results in a smaller but much more highly interconnected graph where BP consistently fails. We demonstrate how our version of the GBP algorithm outperforms BP on synthetic and natural images and in both cases, GBP converges after only a few iterations.
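The plain BP baseline discussed above can be sketched as standard sum-product message passing on a small loopy graph (a triangle); this is ordinary loopy BP, not the paper's GBP with Kikuchi clusters, and the potentials are invented:

```python
import numpy as np

def loopy_bp(unary, edges, pairwise, iters=50):
    """Sum-product loopy BP on a pairwise MRF with binary variables.

    unary: {node: array([p0, p1])} unnormalized node potentials
    edges: list of (i, j) pairs; pairwise: 2x2 compatibility matrix
    shared by all edges. Returns approximate marginals per node.
    """
    # Directed messages m[(i, j)] from i to j, initialized uniform
    m = {(i, j): np.ones(2) for i, j in edges}
    m.update({(j, i): np.ones(2) for i, j in edges})
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            # Product of i's potential and incoming messages except from j
            b = unary[i].copy()
            for (k, l) in m:
                if l == i and k != j:
                    b = b * m[(k, l)]
            msg = pairwise.T @ b          # marginalize over x_i
            new[(i, j)] = msg / msg.sum()
        m = new                           # synchronous update
    marg = {}
    for i in unary:
        b = unary[i].copy()
        for (k, l) in m:
            if l == i:
                b = b * m[(k, l)]
        marg[i] = b / b.sum()
    return marg

# Triangle graph (a single loop) with attractive coupling;
# node 0 has a strong preference for state 0
unary = {0: np.array([0.9, 0.1]),
         1: np.array([0.5, 0.5]),
         2: np.array([0.5, 0.5])}
edges = [(0, 1), (1, 2), (0, 2)]
pairwise = np.array([[2.0, 1.0],
                     [1.0, 2.0]])
marg = loopy_bp(unary, edges, pairwise)
```

On this weakly coupled loop BP still converges and pulls nodes 1 and 2 toward node 0's preferred state; the failure cases motivating GBP arise under strong conflicting interactions.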


international conference on computer vision | 2011

A generative framework to investigate the underlying patterns in human activities

Manavender R. Malgireddy; Ifeoma Nwogu; Venu Govindaraju

We propose a novel generative learning framework for activity categorization. In order to obtain statistical insight into the underlying patterns of motions in activities, we propose a supervised dynamic, hierarchical Bayesian model which connects low-level visual features in videos with poses, motion patterns and classes of activity. Our proposed generative model harnesses both the temporal ordering power of dynamic Bayesian networks such as Hidden Markov Models (HMMs) and the automatic clustering power of hierarchical Bayesian models such as the Latent Dirichlet Allocation (LDA) model. We demonstrate the strength of this model by profiling different activities in scenes of varying complexities, by clustering visual events into poses which in turn are clustered into motion patterns. The model also correlates these motion patterns over time in order to define the signatures for classes of activities. We test our model on several publicly available datasets and achieve high accuracy rates.


computer vision and pattern recognition | 2010

Syntactic image parsing using ontology and semantic descriptions

Ifeoma Nwogu; Venu Govindaraju; Christopher M. Brown

We present an ontology-guided, symbol-based image parser which involves the use of semantic, spoken language descriptions of entities in images as well as the real-world spatial relationships defined between these entities. Our parsing approach explicitly describes objects and the relationships between them with linguistically meaningful modes of colors, textures and [coarse] expressions of shapes. The image parser is built on a syntactic image grammar-based framework and performs a (near) global optimization using superpixels as an initial set of subpatterns. It hypothesizes the entities in images using their local semantic attributes and verifies them globally using their more global features and their relative spatial locations. Evaluations of the parser are performed on selected images which we make publicly available along with their manual segmentations and our labeling results.


Proceedings of SPIE | 2010

An automated process for deceit detection

Ifeoma Nwogu; Mark G. Frank; Venu Govindaraju

In this paper we present a prototype for an automated deception detection system. Similar to polygraph examinations, we attempt to take advantage of the theory that false answers will produce distinctive measurements in certain physiological manifestations. We investigate the role of dynamic eye-based features such as eye closure/blinking and lateral movements of the iris in detecting deceit. The features are recorded both when the test subjects are having non-threatening conversations as well as when they are being interrogated about a crime they might have committed. The rates of the behavioral changes are blindly clustered into two groups. Examining the clusters and their characteristics, we observe that the dynamic features selected for deception detection show promising results with an overall deceptive/non-deceptive prediction rate of 71.43% from a study consisting of 28 subjects.
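The blind clustering of behavioral-change rates into two groups can be sketched with a tiny 1-D k-means (k=2); this is a generic stand-in for the paper's clustering step, and the blink-rate values are hypothetical:

```python
import numpy as np

def two_means_1d(rates, iters=20):
    """Cluster scalar behavioral-change rates into two groups with a
    simple 1-D k-means (k=2); centers start at the min and max rate."""
    rates = np.asarray(rates, dtype=float)
    centers = np.array([rates.min(), rates.max()])
    for _ in range(iters):
        # Assign each rate to its nearest center (label 0 or 1)
        labels = (np.abs(rates - centers[0]) >
                  np.abs(rates - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = rates[labels == k].mean()
    return labels, centers

# Hypothetical blink-rate changes: a tight low group and a high group
rates = [0.9, 1.1, 1.0, 0.8, 3.9, 4.2, 4.0]
labels, centers = two_means_1d(rates)
```

The two resulting clusters would then be examined against ground truth to see whether they separate deceptive from non-deceptive subjects.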


international workshop on combinatorial image analysis | 2011

A shared parameter model for gesture and sub-gesture analysis

Manavender R. Malgireddy; Ifeoma Nwogu; Subarna Ghosh; Venu Govindaraju

Gesture sequences typically have a common set of distinct internal sub-structures which can be shared across the gestures. In this paper, we propose a method using a generative model to learn these common actions, which we refer to as sub-gestures, and in turn perform recognition. Our proposed model learns sub-gestures by sharing parameters between gesture models. We evaluated our method on the Palm Graffiti digits-gesture dataset and showed that the model with shared parameters outperformed the same model without the shared parameters. Also, we labeled different observation sequences, thereby intuitively showing how sub-gestures are related to complete gestures.


Disability and Rehabilitation: Assistive Technology | 2018

Reported use of technology in stroke rehabilitation by physical and occupational therapists.

Jeanne Langan; Heamchand Subryan; Ifeoma Nwogu; Lora A. Cavuoto

Purpose: With the patient care experience being a healthcare priority, it is concerning that patients with stroke reported boredom and a desire for greater fostering of autonomy when evaluating their rehabilitation experience. Technology has the potential to reduce these shortcomings by engaging patients through entertainment and objective feedback. Providing objective feedback has resulted in improved outcomes and may assist the patient in learning how to self-manage rehabilitation. Our goal was to examine the extent to which physical and occupational therapists use technology in clinical stroke rehabilitation and home exercise programs.

Materials and methods: Surveys were sent via mail, email and online postings to over 500 therapists; 107 responded.

Results: Conventional equipment such as stopwatches is more frequently used than newer technology like Wii and Kinect games. Still, fewer than 25% of therapists report using a stopwatch five or more times per week. Notably, most therapists base feedback to patients on objective data less than 50% of the time. At the end of clinical rehabilitation, patients typically receive a written home exercise program and non-technological equipment, such as theraband and/or theraputty, to continue rehabilitation efforts independently.

Conclusions: The use of technology is not pervasive in the continuum of stroke rehabilitation.

Implications for rehabilitation: The patient care experience is a priority in healthcare, so it is troubling when patients report feeling bored and desiring greater fostering of autonomy in stroke rehabilitation. Research examining the use of technology has shown positive results for improving motor performance and engaging patients through entertainment and use of objective feedback. Physical and occupational therapists do not widely use technology in stroke rehabilitation. Therapists should consider using technology in stroke rehabilitation to better meet the needs of the patient.


international joint conference on biometrics | 2014

Use of language as a cognitive biometric trait

Neeti Pokhriyal; Ifeoma Nwogu; Venugopal Govindaraju

This paper investigates whether the cognitive state of a person can be learnt and used as a novel biometric trait. We explore the idea of using language written by an author as his/her cognitive fingerprint. The dataset consists of millions of blogs written by thousands of authors on the Internet. Our proposed method learns a classifier that can distinguish between genuine and impostor authors. Our results are encouraging (we report 72% area under the ROC curve) and show that users do have a distinctive linguistic style, which is evident even when analyzing a corpus as large and diverse as the Internet. When we tested on new authors that the system had never encountered before, our methodology correctly identified genuine authors with 78% accuracy and impostors with 76% accuracy.
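The genuine-versus-impostor decision can be sketched with a toy stylometric verifier; the paper's actual features and classifier are not specified here, so this sketch uses character trigram profiles and cosine similarity with a hypothetical threshold:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character trigram profile -- a common stylometric feature."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two sparse count profiles."""
    keys = set(p) | set(q)
    dot = sum(p[k] * q[k] for k in keys)
    return dot / (math.sqrt(sum(v * v for v in p.values())) *
                  math.sqrt(sum(v * v for v in q.values())))

def same_author(known, questioned, threshold=0.3):
    """Verify authorship by trigram-profile similarity; in practice the
    threshold would be tuned on held-out genuine/impostor pairs."""
    return cosine(char_ngrams(known), char_ngrams(questioned)) >= threshold

known     = "the cat sat on the mat and the cat ran to the mat"
similar   = "the cat ran and sat on the mat near the cat"
different = "zzz qqq xxx vvv www qqq zzz xxx"
genuine  = same_author(known, similar)
impostor = same_author(known, different)
```

A real system would aggregate many such features over long texts and learn the decision boundary rather than fixing it by hand.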


international conference on document analysis and recognition | 2015

Automated analysis of line plots in documents

Rathin Radhakrishnan Nair; Nishant Sankaran; Ifeoma Nwogu; Venu Govindaraju

Information graphics, such as graphs and plots, are used in technical documents to convey information to humans and to facilitate greater understanding. Usually, graphics are a key component of a technical document, as they enable the author to convey complex ideas in a simplified visual format. However, in automatic text recognition systems, which are typically used to digitize documents, the ideas conveyed in a graphical format are lost. We contend that the extracted information can be used to better understand the ideas conveyed in the document. In scientific papers, line plots are the most commonly used graphic for representing experimental results, in the form of correlations between the values represented on the axes. The contribution of our work is a series of image processing algorithms that automatically extract relevant information, including text and plot data, from graphics found in technical documents. We validate the approach by performing experiments on a dataset of line plots obtained from computer science conference papers and evaluate the variation of each reconstructed curve from the original curve. Our algorithm achieves a classification accuracy of 91% across the dataset and successfully extracts the axes from 92% of line plots. Axes label extraction and line curve tracing are also performed successfully in about half the line plots.
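One plausible early step in such a pipeline, locating the plot axes, can be sketched with a projection-profile heuristic (this is a generic illustration on a synthetic binary image, not the authors' algorithm):

```python
import numpy as np

def find_axes(img):
    """Locate the x- and y-axis of a line plot rendered as a binary
    image (1 = ink). The axes are assumed to be the longest horizontal
    and vertical ink runs, i.e. the row and column with the highest
    ink counts -- a simple projection-profile heuristic.
    """
    y_axis_col = int(np.argmax(img.sum(axis=0)))   # column with most ink
    x_axis_row = int(np.argmax(img.sum(axis=1)))   # row with most ink
    return x_axis_row, y_axis_col

# Synthetic 20x30 plot: y-axis at column 2, x-axis at row 17,
# plus a sparse diagonal "curve"
img = np.zeros((20, 30), dtype=int)
img[:, 2] = 1            # vertical axis line
img[17, :] = 1           # horizontal axis line
for i in range(5, 15):
    img[i, i + 3] = 1    # curve pixels
row, col = find_axes(img)
```

With the axes located, the remaining ink inside the plot region can be traced as the curve and mapped back to data coordinates using the axis labels.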

Collaboration


Dive into Ifeoma Nwogu's collaborations.

Top Co-Authors

Mark G. Frank

State University of New York System

Neeti Pokhriyal

State University of New York System