Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Azita Bahrami is active.

Publication


Featured research published by Azita Bahrami.


international conference on information technology coding and computing | 2003

Development of group's signature for evaluation of skin cancer in mice caused by ultraviolet radiation

Ray R. Hashemi; Mahmood Bahar; Alexander A. Tyler; Azita Bahrami; Nan Tang; William G. Hinson

In this research effort, the effect of UVC (260 nm) on the skin of one-month-old Balb/c mice exposed for a total of 100 hours is studied. The goal is to identify those independent variables in the experimental group whose measurements change significantly compared to the measurements of their counterparts in the control group. To meet the goal, we create signatures for both the experimental and control groups using the Kohonen self-organizing map (SOM). Comparing the signatures to each other reveals the significant changes in the independent variables between the two groups. The findings are compared with another set of findings obtained using analysis of variance. The results reveal that the signature approach, built on the SOM methodology, is a viable tool for this type of analysis.
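
A minimal sketch of how such group signatures might be built, using the MiniSom library; the hit-histogram signature, grid size, and data shapes are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative sketch only: group "signatures" as SOM hit histograms.
# Assumes the MiniSom library (pip install minisom); data shapes are hypothetical.
import numpy as np
from minisom import MiniSom

def group_signature(som, records):
    """Histogram of winning nodes: how often each SOM node 'fires' for a group."""
    grid_x, grid_y = som.get_weights().shape[:2]
    hits = np.zeros((grid_x, grid_y))
    for r in records:
        x, y = som.winner(r)
        hits[x, y] += 1
    return hits / len(records)  # normalize so unequal group sizes compare fairly

# Hypothetical data: rows are mice, columns are measured independent variables.
rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, size=(40, 6))
exposed = rng.normal(0.5, 1.0, size=(40, 6))

som = MiniSom(5, 5, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(np.vstack([control, exposed]), num_iteration=1000)

sig_control = group_signature(som, control)
sig_exposed = group_signature(som, exposed)

# Nodes whose firing rate differs most between groups point at variables that changed.
print("signature difference:\n", np.round(sig_exposed - sig_control, 2))
```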


international conference on information technology: new generations | 2012

Predicting Future Climate Using Algae Sedimentation

Jasdeep Natt; Ray R. Hashemi; Azita Bahrami; Mahmood Bahar; Nicholas Tyler; Jay Y. S. Hodgson

Biologists have shown that algae are among the first organisms to be affected by climate change, and vice versa. The goal of this research effort is to predict the future climate using algae species that lived in a lake in the past. Performing age-depth profile analysis on the sediment core obtained from the bottom of Jewel Lake in Alaska yields a dataset of 163 records. The 163 records collectively represent a 4,308-year interval. Each record is composed of 86 attributes: year, 84 species of algae, and climate. Relevancy analysis of the dataset's attributes identified only four species relevant to climate. An extrapolation system architecture with a two-stage prediction process was designed to meet the goal of this research effort. Three different predictive models (regression analysis, neural network, and ID3) were investigated for possible use in developing the extrapolation system, yielding four candidate configurations, the best of which uses neural networks in both stages. The best extrapolation system was able to predict the climate for a year in the future, within the next 278 years, with an average accuracy of 80%.
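
A minimal sketch of such a two-stage extrapolation pipeline, assuming scikit-learn's MLPRegressor as the neural network in both stages; the stage split (year to species abundances, abundances to climate), the stand-in data, and all parameter values are illustrative assumptions.

```python
# Illustrative two-stage extrapolation, assuming scikit-learn is available.
# Stage 1: predict the four relevant algae abundances for a future year.
# Stage 2: predict climate from those predicted abundances.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
years = np.linspace(-4308, 0, 163).reshape(-1, 1)   # hypothetical record years
species = rng.random((163, 4))                      # 4 relevant species (stand-in data)
climate = species @ np.array([0.4, 0.3, 0.2, 0.1])  # stand-in climate signal

stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
stage1.fit(years, species)                          # year -> species abundances

stage2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
stage2.fit(species, climate)                        # abundances -> climate

future_year = np.array([[50.0]])                    # a year beyond the core record
predicted_species = stage1.predict(future_year)
predicted_climate = stage2.predict(predicted_species)
print("predicted climate:", predicted_climate[0])
```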


international conference on information technology: new generations | 2011

Identification of Core, Semi-Core and Redundant Attributes of a Dataset

Ray R. Hashemi; Azita Bahrami; Mark Smith; Simon Young

Data reduction is an essential step in the pre-processing of a dataset, and it is necessary for improving data quality and obtaining the relevant data from the dataset. Data reduction is performed by identifying and removing redundant attributes of the dataset. However, not every non-redundant attribute contributes to the decision (dependent variable) at the same level. Therefore, the non-redundant attributes may be further divided into the two sub-categories of core attributes (those that totally contribute to the decision) and semi-core attributes (those that partially contribute to the decision). In this paper, a methodology for separating core, semi-core, and redundant attributes is introduced and tested. The results show that the proposed methodology has high potential for use in any generalization process.
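
A minimal sketch of one way to separate the three categories, using a rough-sets-style consistency test; the drop threshold and the toy dataset are illustrative assumptions, not the paper's criterion.

```python
# Illustrative sketch: a consistency-based split of attributes into core,
# semi-core, and redundant. The drop threshold is an arbitrary illustrative
# knob, not the paper's methodology.
from collections import defaultdict

def consistency(rows, attrs, decision):
    """Fraction of rows whose values on `attrs` determine a unique decision."""
    groups = defaultdict(set)
    for row in rows:
        groups[tuple(row[a] for a in attrs)].add(row[decision])
    return sum(len(groups[tuple(row[a] for a in attrs)]) == 1
               for row in rows) / len(rows)

def categorize(rows, attrs, decision, threshold=0.25):
    base = consistency(rows, attrs, decision)
    out = {}
    for a in attrs:
        drop = base - consistency(rows, [x for x in attrs if x != a], decision)
        out[a] = ("redundant" if drop == 0      # removal changes nothing
                  else "core" if drop >= threshold
                  else "semi-core")             # partial contribution
    return out

# Tiny hypothetical dataset: 'd' is the decision attribute.
rows = [
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 0, "b": 1, "c": 1, "d": "yes"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "yes"},
]
print(categorize(rows, ["a", "b", "c"], "d"))
# -> {'a': 'redundant', 'b': 'core', 'c': 'redundant'}
```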


International Journal of Intelligent Information Technologies | 2009

Association Analysis of Alumni Giving: A Formal Concept Analysis

Ray R. Hashemi; Louis A. Le Blanc; Azita Bahrami; Mahmood Bahar; Bryan Traywick

A large sample (initially 33,000 cases, representing a ten percent trial) of university alumni giving records for a large public university in the southwestern United States is analyzed by Formal Concept Analysis (FCA). This likely represents the first attempt to analyze such data by means of a machine learning technique. The variables employed include the gift amount to the university foundation as well as traditional demographic variables such as year of graduation, gender, ethnicity, and marital status. The foundation serves as one of the institution’s non-profit, fund-raising organizations. It pursues substantial gifts that are designated for the educational or leadership programs of the giver’s choice. Although the foundation processes gifts of all sizes, its focus is on major gifts and endowments. Association analysis of the given dataset is a two-step process: in the first step, FCA is applied to identify concepts and their relationships, and in the second step, association rules are defined for each concept. The hypothesis examined in this paper is that the generosity of alumni toward their alma mater can be predicted using association rules obtained by applying the FCA approach.
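
A minimal sketch of the first step, enumerating the formal concepts of a tiny binary context by brute force; the alumni attributes here are hypothetical, and production FCA tools use far more efficient algorithms.

```python
# Illustrative sketch: enumerating formal concepts of a tiny binary context
# by brute force. Objects and attributes are hypothetical alumni features,
# not the paper's data.
from itertools import combinations

# context[object] = set of attributes that hold for it
context = {
    "alum1": {"recent_grad", "donor"},
    "alum2": {"recent_grad", "married", "donor"},
    "alum3": {"married", "donor"},
    "alum4": {"recent_grad"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """Objects having every attribute in `intent_set`."""
    return {o for o, attrs in context.items() if intent_set <= attrs}

def intent(objs):
    """Attributes shared by every object in `objs`."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# Every concept's intent is the closure of some attribute subset, so
# closing all subsets finds all concepts.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        e = extent(set(combo))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), "<->", sorted(i))
```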


international conference on information technology | 2007

An L-Tree Based Analysis of E-lessons

Azita Bahrami

The logical structure of an e-lesson may be viewed from three different perspectives (absolute, teacher, and learner), which may be represented as an A-tree, a T-tree, and an L-tree, respectively. Building an A-tree is extremely difficult, if not impossible. A T-tree can be built by a teacher for a class. An L-tree can be built by a learner with the guidance of a teacher. In this paper, the inherent properties of A-trees and L-trees are utilized to answer some crucial questions any e-lesson developer encounters: (a) how are needed new modules for an e-lesson identified and used, (b) how are redundant modules in an e-lesson identified and discarded, (c) how and when should an e-lesson be broken into two or more e-lessons, and (d) how is the level of preparedness of two groups of learners for receiving a new e-lesson compared?
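
A minimal sketch of the kind of tree comparison these questions suggest, flagging modules missing from a learner's L-tree relative to a T-tree and vice versa; the tree encoding and module names are illustrative assumptions.

```python
# Illustrative sketch: comparing a teacher's T-tree with a learner's L-tree
# to flag missing and possibly redundant modules. Trees are encoded as
# parent -> children dicts for illustration.
def nodes(tree, root):
    """Collect all module names reachable from `root`."""
    seen = {root}
    for child in tree.get(root, []):
        seen |= nodes(tree, child)
    return seen

t_tree = {"lesson": ["algebra", "geometry"], "algebra": ["equations"]}
l_tree = {"lesson": ["algebra"], "algebra": ["equations", "games"]}

t_modules = nodes(t_tree, "lesson")
l_modules = nodes(l_tree, "lesson")

print("modules the learner still needs:", sorted(t_modules - l_modules))
print("modules not in the teacher's plan:", sorted(l_modules - t_modules))
```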


Computational Intelligence in Biomedicine and Bioinformatics | 2008

The Use of Rough Sets as a Data Mining Tool for Experimental Bio-data

Ray R. Hashemi; Alexander A. Tyler; Azita Bahrami

The Rough Sets methodology has great potential for mining experimental data. Since its introduction by Pawlak, it has received a great deal of attention in the computing community. However, due to the mathematical nature of the Rough Sets methodology, many experimental scientists lacking a sufficient mathematical background have been hesitant to use it. The goal of this chapter is twofold: (1) to introduce the “Rough Sets” methodology (along with one of its derivatives, “Modified Rough Sets”) in a non-mathematical fashion, hoping to share the potential of this approach with a larger group of non-computationally-oriented scientists (mining of one specific form of implicit data within a bio-dataset is also discussed), and (2) to apply this methodology to a dataset of children with and without Attention Deficit/Hyperactivity Disorder (ADHD) to demonstrate the usefulness of the approach in patient differentiation. The Discriminant Analysis statistical approach, as well as ID3, was also applied to the same dataset for comparison purposes, to find out which approach is most effective.
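
A minimal illustration in code of the core Rough Sets idea the chapter introduces: lower and upper approximations of a decision class computed from indiscernibility classes. The toy rows are hypothetical stand-ins for the ADHD dataset.

```python
# Illustrative sketch: rough-set lower and upper approximations of a decision
# class, computed from indiscernibility classes. Data and attribute names are
# hypothetical.
from collections import defaultdict

rows = [
    {"attention": "low",  "activity": "high", "adhd": "yes"},
    {"attention": "low",  "activity": "high", "adhd": "no"},   # conflicts with row 0
    {"attention": "high", "activity": "low",  "adhd": "no"},
    {"attention": "low",  "activity": "low",  "adhd": "yes"},
]
condition = ["attention", "activity"]
target = {i for i, r in enumerate(rows) if r["adhd"] == "yes"}

# Indiscernibility: rows identical on all condition attributes fall together.
classes = defaultdict(set)
for i, r in enumerate(rows):
    classes[tuple(r[a] for a in condition)].add(i)

lower = set().union(*(c for c in classes.values() if c <= target))
upper = set().union(*(c for c in classes.values() if c & target))

print("lower approximation (certainly ADHD):", sorted(lower))   # [3]
print("upper approximation (possibly ADHD):", sorted(upper))    # [0, 1, 3]
```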


Emerging Trends in Computational Biology, Bioinformatics, and Systems Biology | 2015

Chapter 15 – Region Growing in Nonpictorial Data for Organ-Specific Toxicity Prediction

Ray R. Hashemi; Azita Bahrami; Mahmood Bahar; Nicholas R. Tyler; Daniel Swain

Region growing is a well-known concept in image processing that, among other things, effectively contributes to the mining of pictorial data. The goal of this research effort is to (1) investigate region growing in nonpictorial data and (2) determine the effectiveness of the regions in mining such data. Part (1) is met by introducing a new version of the self-organizing map (SOM), the Neighborly SOM, capable of delivering such regions. Part (2) is met by introducing a new prediction methodology using the delivered regions and measuring its effectiveness by (a) applying the method to 10 pairs of training and test sets [repeated random sub-sampling (RRSS) cross-validation] to predict the chemical agents’ liver toxicity and (b) comparing the liver toxicity prediction accuracy with the predictions produced by C4.5 and the traditional SOM using leave-one-out (LOO) and RRSS cross-validations. The results revealed that the proposed methodology has better performance.
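
The Neighborly SOM itself is not public code; as a stand-in, here is a minimal sketch of region growing over a trained SOM's node grid, merging neighboring nodes whose weight vectors are similar. MiniSom, the grid size, and the distance threshold are illustrative assumptions.

```python
# Illustrative sketch: growing regions over a trained SOM's node grid by
# flood-filling across neighbors with similar weight vectors. This is a
# stand-in for the idea, not the Neighborly SOM algorithm.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
data = rng.random((200, 5))                       # hypothetical nonpictorial records

som = MiniSom(6, 6, input_len=5, random_seed=0)
som.train_random(data, num_iteration=2000)
weights = som.get_weights()                       # shape (6, 6, 5)

def grow_regions(weights, threshold=0.3):
    """Label grid nodes: neighbors closer than `threshold` share a region."""
    gx, gy, _ = weights.shape
    labels = -np.ones((gx, gy), dtype=int)
    region = 0
    for sx in range(gx):
        for sy in range(gy):
            if labels[sx, sy] != -1:
                continue
            stack = [(sx, sy)]                    # flood fill from the seed node
            labels[sx, sy] = region
            while stack:
                x, y = stack.pop()
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if (0 <= nx < gx and 0 <= ny < gy and labels[nx, ny] == -1
                            and np.linalg.norm(weights[x, y] - weights[nx, ny]) < threshold):
                        labels[nx, ny] = region
                        stack.append((nx, ny))
            region += 1
    return labels

print(grow_regions(weights))
```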


international conference on conceptual structures | 2011

An android based medication reminder system: a concept analysis approach

Ray R. Hashemi; Leslie Sears; Azita Bahrami

Failure to take medication as prescribed is one of the leading issues in health care today. Medication reminder systems are a class of applications designed to remind people to take their medication as prescribed. While there are quite a few medication reminder systems available, they all require the user to enter the data manually. To simplify the use of these applications, this paper presents the development of a system that allows the user to take a picture of his/her prescription medication labels and have reminders automatically generated. The system is developed for Android OS-powered mobile devices, and it employs image processing and a concept analysis approach. The accuracy for parsing dosing-instruction text from images of medication labels and creating reminder events is over 90%.
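
A minimal sketch of the dosing-instruction parsing step; the real system runs on Android and uses concept analysis, while this regular expression over OCR'd label text, the frequency table, and the clock times are illustrative assumptions.

```python
# Illustrative sketch: turning OCR'd dosing text into reminder times with a
# regular expression. The label text and frequency -> time mapping are
# hypothetical.
import re

FREQUENCY_TIMES = {          # hypothetical frequency -> clock-time mapping
    "once":  ["09:00"],
    "twice": ["09:00", "21:00"],
    "three": ["08:00", "14:00", "20:00"],
}

def parse_label(text):
    """Extract dose count and daily reminder times from a dosing instruction."""
    m = re.search(r"take\s+(\d+)\s+\w+\s+(once|twice|three times)\s+daily",
                  text, re.IGNORECASE)
    if not m:
        return None
    dose = int(m.group(1))
    times = FREQUENCY_TIMES[m.group(2).split()[0].lower()]
    return {"dose": dose, "times": times}

print(parse_label("TAKE 1 TABLET TWICE DAILY WITH FOOD"))
# -> {'dose': 1, 'times': ['09:00', '21:00']}
```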


international conference on information technology: new generations | 2010

A Framework for Normalization of Homogeneous and Semi-homogeneous E-Lessons

Azita Bahrami

An e-lesson can be of a Homogeneous or Semi-homogeneous nature. Even if a subset of such e-lessons has only one module (m1) in common with other subsets, the “common module” may not contain the same array of concepts. As a result, any investigation of such e-lessons based solely on their module names is inherently flawed. Normalization of e-lessons can remove this flaw. A set of N Homogeneous and Semi-homogeneous e-lessons is considered normalized if same-name modules cover the same concepts. In this paper, a framework for normalization of e-lessons is presented as the first phase of creating a mechanism for accurate cross-referencing, indexing, retrieval, and data mining of e-lesson modules.
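
A minimal sketch of one plausible normalization: giving every same-name module the union of the concepts it covers across lessons. The lesson contents are hypothetical, and union-based normalization is an assumption, not the framework's definition.

```python
# Illustrative sketch: normalizing e-lessons so that same-name modules cover
# the same concept set (here, the union across lessons). Contents are
# hypothetical.
lessons = {
    "lesson1": {"intro": {"definitions"}, "m1": {"loops", "branching"}},
    "lesson2": {"m1": {"loops", "recursion"}, "summary": {"review"}},
}

# Collect the union of concepts claimed by each module name across lessons.
canonical = {}
for modules in lessons.values():
    for name, concepts in modules.items():
        canonical.setdefault(name, set()).update(concepts)

# Rewrite every lesson so its modules carry the canonical concept sets.
normalized = {
    lesson: {name: set(canonical[name]) for name in modules}
    for lesson, modules in lessons.items()
}

for lesson, modules in normalized.items():
    print(lesson, {n: sorted(c) for n, c in modules.items()})
```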


Archive | 2009

Mining E-Documents to Uncover Structures

Azita Bahrami

An e-Document, D, coded in HTML is composed of a head and a body. The body includes the contents of the e-document, and the head includes, among other things, metadata.
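
A minimal sketch of splitting an HTML e-document into head metadata and body content, using BeautifulSoup; the sample document is hypothetical.

```python
# Illustrative sketch: separating head metadata from body content with
# BeautifulSoup (pip install beautifulsoup4). The sample document is made up.
from bs4 import BeautifulSoup

html = """
<html>
  <head>
    <title>Sample e-Document</title>
    <meta name="keywords" content="mining, structure">
  </head>
  <body><h1>Section 1</h1><p>Content of the e-document.</p></body>
</html>
"""

soup = BeautifulSoup(html, "html.parser")
metadata = {m.get("name"): m.get("content")
            for m in soup.head.find_all("meta") if m.get("name")}

print("title:", soup.title.string)
print("metadata:", metadata)
print("body text:", soup.body.get_text(" ", strip=True))
```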

Collaboration


Dive into Azita Bahrami's collaboration.

Top Co-Authors

Ray R. Hashemi (Armstrong State University)
Nicholas Tyler (Armstrong State University)
Matthew Antonelli (Armstrong State University)
Bryan Traywick (Armstrong State University)
Daniel Swain (Armstrong State University)
Jasdeep Natt (Armstrong State University)
Jay Y. S. Hodgson (Armstrong State University)
Leslie Sears (Armstrong State University)
Louis A. Le Blanc (University of Arkansas at Little Rock)