Publication


Featured research published by Shamim Ripon.


Web Services and Formal Methods | 2005

Executable semantics for compensating CSP

Michael Butler; Shamim Ripon

Compensation is an error recovery mechanism for long-running transactions. Compensating CSP is a variant of the CSP process algebra with constructs for orchestration of compensations. We present a simple operational semantics for Compensating CSP and outline an encoding of this semantics in Prolog. This provides a basis for implementation and model checking of the language.
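
As a rough illustration of the compensation behaviour that Compensating CSP formalises (a minimal sketch only, not the authors' Prolog encoding), a long-running transaction can be viewed as a sequence of forward actions, each installing a compensation that is run in reverse order when a later step fails:

# Minimal illustration of the compensation idea behind Compensating CSP:
# each forward action installs a compensation; on failure the installed
# compensations run in reverse order. This is NOT the authors' Prolog
# encoding, just a sketch of the error-recovery behaviour it formalises.

class TransactionFailed(Exception):
    pass

def run_transaction(steps):
    """steps: list of (forward, compensation) callables."""
    installed = []
    try:
        for forward, compensate in steps:
            forward()
            installed.append(compensate)
    except TransactionFailed:
        # Run accumulated compensations in reverse order.
        for compensate in reversed(installed):
            compensate()
        return "compensated"
    return "committed"

if __name__ == "__main__":
    log = []
    def book_flight():   log.append("flight booked")
    def cancel_flight(): log.append("flight cancelled")
    def book_hotel():    raise TransactionFailed("no rooms")
    def cancel_hotel():  log.append("hotel cancelled")

    print(run_transaction([(book_flight, cancel_flight),
                           (book_hotel, cancel_hotel)]))  # -> compensated
    print(log)  # ['flight booked', 'flight cancelled']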


Computer Methods and Programs in Biomedicine | 2016

A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset

Sarwar Kamal; Shamim Ripon; Nilanjan Dey; Amira S. Ashour; V. Santhi

BACKGROUND: In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes inadequate for the new space and time requirements, so it is critical to process massive amounts of data to retrieve knowledge, and the existing software and automated tools to handle big datasets are not sufficient. As a result, an expandable mining technique that exploits the large storage and processing capability of distributed or parallel processing platforms is essential.

METHOD: This analysis introduces a distributed clustering methodology for imbalance data reduction using a k-nearest neighbor (K-NN) classification approach. The pivotal objective of this work is to represent real training datasets with a reduced number of elements or instances; these reduced datasets ensure faster classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big datasets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches.

RESULTS: To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset consisting of 90 million pairs has been used. The proposed model reduces the imbalance datasets from large-scale datasets without loss of accuracy.

CONCLUSIONS: The obtained results show that the MapReduce-based K-NN classifier provides accurate results for big DNA data.
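
The MapReduce framing can be made concrete with a small, assumed sketch (not the paper's implementation): each map task finds the k nearest neighbours of a query within its own data split, and the reduce step merges the per-split candidates into a global top-k before voting.

# A minimal sketch (assumed, not the paper's implementation) of the
# MapReduce pattern for K-NN over a partitioned training set.
from heapq import nsmallest

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def map_phase(split, query, k):
    """split: list of (features, label). Returns the local k nearest."""
    return nsmallest(k, ((euclidean(x, query), label) for x, label in split))

def reduce_phase(candidate_lists, k):
    """Merge per-split candidates and vote among the global k nearest."""
    merged = nsmallest(k, (c for cands in candidate_lists for c in cands))
    votes = {}
    for _, label in merged:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Example with two "mappers":
splits = [[((0.1, 0.2), "coding"), ((0.9, 0.8), "non-coding")],
          [((0.2, 0.1), "coding"), ((0.8, 0.9), "non-coding")]]
query = (0.15, 0.15)
print(reduce_phase([map_phase(s, query, k=3) for s in splits], k=3))  # coding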


International Journal of Rough Sets and Data Analysis | 2016

Theoretical Analysis of Different Classifiers under Reduction Rough Data Set: A Brief Proposal

Shamim Ripon; Sarwar Kamal; Saddam Hossain; Nilanjan Dey

Rough sets play a vital role in overcoming complexity, vagueness, uncertainty, imprecision, and incomplete data during feature analysis. Classification is tested on a dataset that maintains an exact class and review process, where key attributes decide the class positions. To assess efficient and automated learning, algorithms are applied over training datasets. Generally, classification is supervised learning whereas clustering is unsupervised. Classification under mathematical models deals with mining rules and machine learning. The objective of this work is to establish a strong theoretical and manual analysis among three popular classifiers, namely K-nearest neighbor (K-NN), Naive Bayes and the Apriori algorithm. Hybridization of these three classifiers with rough sets enables them to address larger datasets. The performance of the three classifiers has been tested in the absence and presence of rough sets. The work is in the implementation phase for DNA (deoxyribonucleic acid) datasets and will lead to an automated system for assessing classifiers in a machine learning environment.
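
As an assumed illustration (not the paper's system) of the rough-set machinery referred to above, the following sketch computes the lower and upper approximations of a target concept from the equivalence classes induced by a chosen attribute set:

# A minimal sketch of rough-set lower/upper approximation used to reduce
# data before classification: objects indiscernible on the chosen attributes
# form equivalence classes, and a target class is approximated from below
# and above by those classes.
from collections import defaultdict

def equivalence_classes(objects, attrs):
    """Group object ids by their values on the selected attributes."""
    classes = defaultdict(set)
    for obj_id, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(obj_id)
    return classes.values()

def approximations(objects, attrs, target):
    lower, upper = set(), set()
    for eq in equivalence_classes(objects, attrs):
        if eq <= target:          # entirely inside the target concept
            lower |= eq
        if eq & target:           # overlaps the target concept
            upper |= eq
    return lower, upper

# Toy decision table: attribute 'gc' (GC-content band) vs. target "coding".
objects = {1: {"gc": "high"}, 2: {"gc": "high"},
           3: {"gc": "low"},  4: {"gc": "low"}}
coding = {1, 2, 3}
print(approximations(objects, ["gc"], coding))
# lower = {1, 2} (certainly coding), upper = {1, 2, 3, 4} (possibly coding)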


Neural Computing and Applications | 2018

Evolutionary framework for coding area selection from cancer data

Sarwar Kamal; Nilanjan Dey; Sonia Farhana Nimmy; Shamim Ripon; Nawab Yousuf Ali; Amira S. Ashour; Wahiba Ben Abdessalem Karaa; Gia Nhu Nguyen; Fuqian Shi

Cancer data analysis is significant for detecting the codes that are responsible for cancer diseases. It is important to find the coding regions in disease-infected biological data; the infected data will be helpful for designing proper drugs and will support laboratory assessments. Codes bear specific meaning for various features as well as symptoms of diseases, and coding of biological data is a key area for obtaining exact information on animals to discover the desired medicine. In the current work, four different machine learning approaches, namely support vector machine (SVM), principal component analysis (PCA), neural mapping skyline filtering (NMSF) and Fisher's discriminant analysis (FDA), were applied for data reduction and coding area selection. The experimental analysis established that SVM outperforms PCA and FDA. However, due to its mapping facility, NMSF outperforms SVM, so NMSF achieved the best results among the four techniques. The Matthews correlation coefficient was used to evaluate the accuracy, specificity, sensitivity, F-measures and error rate of the four methods used to determine the coding area. The detailed experimental analysis includes a comparison study among the four classifiers on the deoxyribonucleic acid dataset.
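
The comparison can be sketched with standard tooling; the snippet below is an assumed illustration rather than the authors' pipeline, using scikit-learn stand-ins (SVM, PCA followed by SVM, and LDA in place of FDA; NMSF is specific to the paper and omitted) scored with the Matthews correlation coefficient on synthetic data:

# Hedged sketch: compare classifiers for coding-region selection and score
# them with the Matthews correlation coefficient, as the abstract describes.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for encoded DNA features (coding vs. non-coding).
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "PCA + SVM": make_pipeline(PCA(n_components=10), SVC(kernel="rbf")),
    "FDA (LDA)": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mcc = matthews_corrcoef(y_te, model.predict(X_te))
    print(f"{name}: MCC = {mcc:.3f}")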


International Journal of Future Computer and Communication | 2013

Elicitation and Modeling Non-Functional Requirements - A POS Case Study

Md. Mijanur Rahman; Shamim Ripon

Proper management of requirements is crucial to the successful development of software within limited time and cost. Non-functional requirements (NFRs) are one of the key criteria for comparing software systems. In most software development, NFRs are specified as additional requirements of the software. NFRs such as performance, reliability, maintainability, security and accuracy have to be considered at an early stage of software development, just like functional requirements (FRs). However, identifying NFRs is not an easy task. Although there are well-developed techniques for eliciting functional requirements, there is a lack of elicitation mechanisms for NFRs and no proper consensus regarding NFR elicitation techniques, and eliciting NFRs is considered one of the challenging jobs in requirements analysis. This paper proposes a UML use case based questionnaire approach to identifying and classifying the NFRs of a system. The proposed approach is illustrated using a Point of Sale (POS) case study.
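
The questionnaire idea can be sketched as a simple mapping from use cases to NFR categories; the use cases, questions and categories below are hypothetical, purely to illustrate the shape of the approach:

# Hypothetical sketch of a use-case questionnaire: each POS use case carries
# a few yes/no questions, and an affirmative answer tags the use case with
# the corresponding NFR category. Not the paper's actual questionnaire.
QUESTIONNAIRE = {
    "Process Sale": [
        ("Must the sale complete within a fixed response time?", "performance"),
        ("Must card data be protected during payment?", "security"),
    ],
    "Generate Report": [
        ("Must reports remain available if one terminal fails?", "reliability"),
    ],
}

def elicit_nfrs(answers):
    """answers: {use_case: [True/False per question]} -> {use_case: NFR set}."""
    nfrs = {}
    for use_case, questions in QUESTIONNAIRE.items():
        chosen = {cat for (question, cat), yes
                  in zip(questions, answers.get(use_case, [])) if yes}
        nfrs[use_case] = chosen
    return nfrs

print(elicit_nfrs({"Process Sale": [True, True], "Generate Report": [False]}))
# {'Process Sale': {'performance', 'security'}, 'Generate Report': set()}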


Neural Network World | 2017

FbMapping: An Automated System for Monitoring Facebook

Sarwar Kamal; Nilanjan Dey; Amira S. Ashour; Shamim Ripon; Valentina E. Balas; Mohammad Shibli Kaysar

In the recent modernized era, the number of Facebook users is increasing dramatically, and daily-life information on social networking sites changes rapidly over the web. Teenagers and university students are the major users of the different social networks all over the world. In order to maintain rapid user satisfaction, information flow and clustering are essential. However, these tasks are very challenging due to the excessive size of the datasets, so cleaning the original data is significant. In the current work, Fisher's Discrimination Criterion (FDC) is applied to clean the raw datasets. The FDC separates the datasets for a superior fit in the least squares sense, arranging the data along linear combinations with a high ratio of between-group to within-group variance. In the proposed approach, the separated data are handled by a Bigtable mapping constructed in three phases: map specification, tabular representation and aggregation. The first phase organizes the cleaned datasets by row, column and timestamp. In the tabular representation, the Sorted String Table (SSTable) ensures the exact mapping. The aggregation phase finds the similarity among the extracted datasets. Mapping, preprocessing and aggregation help to monitor information flow and communication over Facebook, and for smooth and continuous monitoring a Dynamic Source Monitoring (DSM) scheme is applied. Adequate experimental comparisons and synthesis are performed by mapping the Facebook datasets. The results demonstrate the efficiency of the proposed machine learning approach for monitoring Facebook datasets.
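
A minimal, assumed sketch (not the FbMapping system itself) of the Fisher criterion used for cleaning: each candidate feature is scored by the ratio of between-group separation to within-group variance, and low-scoring features are dropped as noise:

# Fisher discrimination criterion per feature: (mean gap)^2 divided by the
# sum of within-group variances. High scores indicate features that separate
# the two groups well; the threshold below is an assumption for illustration.
import numpy as np

def fisher_score(feature_a, feature_b):
    a, b = np.asarray(feature_a), np.asarray(feature_b)
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

# Toy example: posts by two user groups, two candidate features each.
group_a = np.array([[5.1, 0.2], [4.8, 0.9], [5.3, 0.4]])   # e.g. teenagers
group_b = np.array([[1.0, 0.5], [1.2, 0.3], [0.9, 0.8]])   # e.g. others
scores = [fisher_score(group_a[:, j], group_b[:, j]) for j in range(2)]
kept = [j for j, s in enumerate(scores) if s > 1.0]  # assumed threshold
print(scores, "-> keep features", kept)   # feature 0 separates the groups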


ACM SIGSOFT Software Engineering Notes | 2012

A unified tabular method for modeling variants of software product line

Shamim Ripon

Reuse of software is a promising approach to improving the efficiency of software development in terms of time, cost and quality. Reuse requires a systematic approach, and the best results are achieved by focusing on systems in a specific domain, a so-called product line. The key difference between conventional software engineering and software product line engineering is variant management. The main idea of a software product line is to identify the common core functionality, which can be implemented once and reused afterwards for all members of the product line. To facilitate this reuse opportunity, the domain engineering phase builds the domain model comprising the common as well as the variant requirements. In principle, requirements common to all systems in a family are easy to handle; problems arise when handling variants. Different variants have dependencies on each other, and a single variant can affect several variants of the domain model. These problems become complex when the volume of information in a domain grows and there are many variants with several interdependencies. Hence, a separate model is required for handling the variants. This paper presents a mechanism, which we call the Unified Tabular Method, to facilitate the management of variant dependencies in product lines. The tabular method consists of a variant part to model the variants and their dependencies, and a decision table to record the customization decision for each variant while deriving customized products. The tabular method alleviates the problem of a possible explosion of variant combinations and facilitates the tracing of variant information in the domain model.
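
The tabular idea can be illustrated with a hypothetical sketch: a variant table records each variant's requires/excludes dependencies, a decision table records the customization choices, and product derivation checks the decisions against the dependencies (all names below are illustrative):

# Hypothetical sketch of a variant table plus decision table; variant names
# and dependencies are invented for illustration, not taken from the paper.
VARIANT_TABLE = {
    "card_payment":  {"requires": {"payment"}, "excludes": set()},
    "cash_payment":  {"requires": {"payment"}, "excludes": set()},
    "loyalty_bonus": {"requires": {"card_payment"}, "excludes": {"guest_mode"}},
}

def derive_product(decision_table):
    """decision_table: {variant: bool}. Returns the selected set or raises."""
    selected = {v for v, chosen in decision_table.items() if chosen}
    for v in selected:
        deps = VARIANT_TABLE.get(v, {"requires": set(), "excludes": set()})
        missing = deps["requires"] - selected
        clashes = deps["excludes"] & selected
        if missing or clashes:
            raise ValueError(f"{v}: missing {missing}, conflicts {clashes}")
    return selected

print(derive_product({"payment": True, "card_payment": True,
                      "loyalty_bonus": True, "guest_mode": False}))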


Software Product Lines | 2012

Modeling and analysis of product-line variants

Shamim Ripon; Keya Azad; Sk. Jahir Hossain; Mehidee Hassan

Formal verification of variant requirements has gained much interest in the software product line (SPL) community. Feature diagrams are widely used to model product line variants; however, a precisely defined formal notation for representing and verifying such models is lacking. This paper presents an approach to modeling and analyzing SPL variant feature diagrams using first-order logic, providing a precise and rigorous formal interpretation of the feature diagrams. Logical expressions are built by modeling variants and their dependencies with propositional connectives. These expressions can then be validated by any suitable verification tool such as Alloy. A case study of a Computer Aided Dispatch (CAD) system variant feature model is presented to illustrate the analysis and verification process.
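
A minimal sketch of the encoding idea (assumed, not the paper's Alloy model): mandatory-child, requires and excludes relations become boolean implications that a candidate feature selection must satisfy:

# Feature-diagram dependencies as propositional constraints, checked against
# a candidate configuration. Feature names below are illustrative only.
def valid_configuration(selected, mandatory, requires, excludes):
    # Mandatory child: parent selected implies child selected.
    for parent, child in mandatory:
        if parent in selected and child not in selected:
            return False
    # Requires: A -> B.
    for a, b in requires:
        if a in selected and b not in selected:
            return False
    # Excludes: not (A and B).
    for a, b in excludes:
        if a in selected and b in selected:
            return False
    return True

# Tiny CAD-like feature model (illustrative names).
mandatory = [("dispatch", "call_taking")]
requires  = [("gps_tracking", "map_display")]
excludes  = [("basic_ui", "map_display")]

print(valid_configuration({"dispatch", "call_taking", "gps_tracking",
                           "map_display"}, mandatory, requires, excludes))  # True
print(valid_configuration({"dispatch", "gps_tracking"},
                          mandatory, requires, excludes))                   # False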


Electronic Notes in Theoretical Computer Science | 2009

PVS Embedding of cCSP Semantic Models and Their Relationship

Shamim Ripon; Michael Butler

This paper demonstrates an embedding of the semantic models of the cCSP process algebra in the general-purpose theorem prover PVS. cCSP is a language designed to model long-running business transactions with constructs for orchestration of compensations. The cCSP process algebra terms are defined in PVS using a mutually recursive datatype, and both the trace semantics and the operational semantics of the algebra are embedded in PVS. We show how these semantic embeddings are used to define and prove a relationship between the semantic models using the powerful induction mechanism of PVS.
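
The shape of such an embedding can be hinted at with Python dataclasses standing in for the PVS mutually recursive datatype; this mirrors only the split into standard and compensable process terms, not the PVS theories or the semantic functions related in the paper:

# Sketch of the mutually recursive term structure of cCSP: standard
# processes and compensable processes (a compensable process pairs forward
# behaviour with its compensation). Illustrative only.
from __future__ import annotations
from dataclasses import dataclass
from typing import Union

# Standard processes
@dataclass
class Skip: pass

@dataclass
class Action:
    name: str

@dataclass
class Seq:
    first: Process
    second: Process

@dataclass
class Transaction:            # [PP] : run a compensable process as a block
    body: CompProcess

Process = Union[Skip, Action, Seq, Transaction]

# Compensable processes (mutually recursive with the above)
@dataclass
class CompPair:               # P ÷ Q : forward behaviour with compensation
    forward: Process
    compensation: Process

@dataclass
class CompSeq:
    first: CompProcess
    second: CompProcess

CompProcess = Union[CompPair, CompSeq]

# Example term: [ (book ÷ cancel) ; (pay ÷ refund) ]
example = Transaction(CompSeq(CompPair(Action("book"), Action("cancel")),
                              CompPair(Action("pay"), Action("refund"))))
print(example)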


International Conference on Advanced Computer Science Applications and Technologies | 2012

Web Service Composition -- BPEL vs cCSP Process Algebra

Shamim Ripon; Mohammad Salah Uddin; Aoyan Barua

Web services technology provides a platform on which distributed services can be developed, and interoperability among these services is achieved through various standard protocols. In recent years, several studies have suggested that process algebras provide satisfactory support for the whole process of web services development. Business transactions, on the other hand, involve coordination and interaction between multiple partners, and with the emergence of web services, business transactions are conducted using these services. The coordination among the business processes is crucial, as is the handling of faults that can arise at any stage of a transaction. BPEL models the behavior of business process interaction by providing an XML-based grammar to describe the control logic required to coordinate the web services participating in a process flow. However, BPEL lacks a proper formal semantics, so the composition of business processes cannot be formally verified. Process algebra, on the other hand, provides a formal foundation for rigorous verification of the composition. This paper presents a comparison of web service composition between BPEL and the cCSP process algebra.

Collaboration


Dive into Shamim Ripon's collaborations.

Top Co-Authors

Nilanjan Dey
Techno India College of Technology

Michael Butler
University of Southampton