Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haslina Md Sarkan is active.

Publication


Featured research published by Haslina Md Sarkan.


IEEE Conference on Open Systems | 2011

Traffic signal control based on Adaptive Neural-Fuzzy Inference System applied to intersection

Azura Che Soh; Ribhan Zafira Abdul Rahman; Lai Ghuan Rhung; Haslina Md Sarkan

The Adaptive Neural-Fuzzy Inference System (ANFIS), which integrates the best features of fuzzy systems and neural networks, has been widely applied in many areas. It can be used to synthesize controllers that tune the fuzzy control system automatically, and models that learn from past data to predict future behavior. The aim of this research is to develop an ANFIS traffic signal controller for multilane intersections in order to ease congestion at traffic intersections. A new concept for generating sample data for ANFIS training is introduced in this research. The sample data are generated from fuzzy rules and can be analysed using a tree diagram. The controller is simulated on a multilane traffic intersection model developed using M/M/1 queuing theory, and its performance in terms of average waiting time, queue length and delay time is compared with traditional controllers and a fuzzy controller. Simulation results show that the average waiting time, queue length and delay time of the ANFIS traffic signal controller are the lowest compared with the other three controllers. In conclusion, the efficiency and performance of the ANFIS controller are much better than those of the fuzzy and traditional controllers across different traffic volumes.
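
The intersection model above is built on M/M/1 queuing theory. As a rough, hypothetical sketch (not the authors' simulator), the snippet below estimates the average waiting time at a single approach with a discrete-event M/M/1 simulation; the arrival rate, service rate and number of vehicles are invented parameters.

```python
import random

def mm1_average_wait(arrival_rate, service_rate, n_vehicles=10000, seed=0):
    """Estimate the mean waiting time (time queued before service)
    for an M/M/1 queue via a simple discrete-event simulation."""
    rng = random.Random(seed)
    arrival_time = 0.0          # time the current vehicle arrives
    server_free_at = 0.0        # time the "green-phase server" becomes free
    total_wait = 0.0
    for _ in range(n_vehicles):
        arrival_time += rng.expovariate(arrival_rate)   # Poisson arrivals
        start_service = max(arrival_time, server_free_at)
        total_wait += start_service - arrival_time      # time spent queuing
        server_free_at = start_service + rng.expovariate(service_rate)
    return total_wait / n_vehicles

# Hypothetical example: 0.4 vehicles/s arriving, 0.5 vehicles/s served.
# Queuing theory predicts Wq = rho / (mu - lambda) = 0.8 / 0.1 = 8 s per vehicle.
print(mm1_average_wait(0.4, 0.5))
```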


2011 IEEE International Systems Conference | 2011

Robust corrective and preventive action (CAPA)

Amir Hatami Hardoroudi; Ali Farhang Dareshuri; Haslina Md Sarkan; Mona Nourizadeh

The corrective and preventive action (CAPA) procedure in most organizations is merely used to log problems. These organizations do not implement a robust CAPA process, and as a result they are still struggling with CAPA today. Many firms have been using root cause analysis and corrective action programs. A case study has been conducted on a company which has long recognized that a good root cause, corrective and preventive action program is crucial to satisfying its customers. This is intended to help managers make decisions effectively. Initially, CAPA was not properly implemented and was not cost-effective for the company; for example, it could not meet operational management's needs in a timely, efficient and effective manner. Therefore, the authors conducted secondary research and adopted universal process and quality standards from CMMI Level 5 and ISO 20000 at the top levels of the CAPA procedure in order to address the CAPA issues.


International Colloquium on Signal Processing and Its Applications | 2016

Single image Super Resolution by no-reference image quality index optimization in PCA subspace

Brian Sumali; Haslina Md Sarkan; Nozomu Hamada; Yasue Mitsukura

Principal Component Analysis (PCA) has been effectively applied to restoring images degraded by atmospheric turbulence. PCA-based approaches improve image quality by adding high-frequency components extracted using PCA to the blurred image. The PCA-based restoration process is similar to conventional single-frame Super-Resolution (SR) methods, which perform SR by enhancing the edge portions of low-resolution images. This paper aims to introduce PCA-based restoration to solve the SR problem with additive white Gaussian noise. We conducted experiments using a standard image database and show comparative results against the latest deep-learning SR approach.
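
The restoration idea, adding PCA-extracted high-frequency detail back to a blurred image, can be illustrated with a minimal sketch. This is a toy example of the general principle, not the paper's method: PCA is fitted to image patches, each patch is reconstructed from its leading components, and the residual is treated as a high-frequency layer to be amplified. Patch size, component count and gain are assumed values.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_detail_boost(img, patch=8, n_components=8, gain=0.5):
    """Sharpen a grayscale image by amplifying the PCA residual of its patches.

    img: 2D float array in [0, 1] whose sides are multiples of `patch`.
    """
    h, w = img.shape
    # Split the image into non-overlapping patch x patch blocks (one row per block).
    blocks = (img.reshape(h // patch, patch, w // patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, patch * patch))
    pca = PCA(n_components=n_components).fit(blocks)
    smooth = pca.inverse_transform(pca.transform(blocks))  # low-frequency part
    detail = blocks - smooth                                # high-frequency residual
    boosted = blocks + gain * detail
    # Reassemble the boosted blocks into an image.
    out = (boosted.reshape(h // patch, w // patch, patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(h, w))
    return np.clip(out, 0.0, 1.0)

# Hypothetical usage on a random 64x64 "image".
img = np.random.rand(64, 64)
print(pca_detail_boost(img).shape)  # (64, 64)
```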


Advanced Information Networking and Applications | 2014

Multi-criteria Consensus-Based Service Selection Using Crowdsourcing

Mahdi Sharifi; Azizah Abdul Manaf; Ali Memariani; Homa Movahednejad; Haslina Md Sarkan; Amir Vahid Dastjerdi

Different evaluator entities, either human agents (e.g., experts) or software agents (e.g., monitoring services), are involved in assessing the QoS parameters of candidate services, which leads to diversity in service assessments. This diversity makes service selection a challenging task, especially when numerous quality-of-service criteria and a range of providers are considered. To address this problem, this study, as a first step, presents a consensus-based service assessment methodology that utilizes consensus theory to evaluate service behavior for a single QoS criterion using the power of crowdsourcing. For this purpose, trust level metrics are introduced to measure the strength of a consensus based on the trustworthiness levels of crowd members. The peers converge to the most trustworthy evaluation. Next, a fuzzy inference engine is used to aggregate the assessed QoS values based on user preferences, since multiple QoS criteria must be addressed in real-life scenarios.
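
As a hedged illustration of trust-weighted consensus over a single QoS criterion (not the paper's algorithm), the sketch below repeatedly moves each peer's assessment toward the trust-weighted mean of the crowd; the ratings and trust levels are invented.

```python
import numpy as np

def trust_weighted_consensus(assessments, trust, rounds=50, step=0.5):
    """Iteratively pull each peer's QoS assessment toward the
    trust-weighted mean of the crowd (a DeGroot-style update)."""
    x = np.asarray(assessments, dtype=float)
    w = np.asarray(trust, dtype=float)
    w = w / w.sum()                      # normalise trust levels into weights
    for _ in range(rounds):
        mean = np.dot(w, x)              # trust-weighted crowd opinion
        x = x + step * (mean - x)        # each peer moves toward it
    return x

# Hypothetical example: five peers rate a service's response-time QoS in [0, 1];
# the more trustworthy peers dominate the value everyone converges to.
ratings = [0.9, 0.7, 0.4, 0.8, 0.2]
trust   = [0.9, 0.8, 0.3, 0.7, 0.1]
print(trust_weighted_consensus(ratings, trust))
```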


International Conference on Quantitative Sciences and Its Applications (ICOQSIA 2014) | 2014

Quantitative measure in image segmentation for skin lesion images: A preliminary study

Nurulhuda Firdaus Mohd Azmi; Mohd Hakimi Aiman Ibrahim; Lau Hui Keng; Nuzulha Khilwani Ibrahim; Haslina Md Sarkan

Automatic Skin Lesion Diagnosis (ASLD) allows skin lesions to be diagnosed using a computer or mobile devices. The idea of using a computer to assist in the diagnosis of skin lesions was first proposed in the literature around 1985. Images of skin lesions are analyzed by the computer to capture certain features thought to be characteristic of skin diseases. These features (expressed as numeric values) are then used to classify the image and report a diagnosis. Image segmentation is often a critical step in image analysis, and it may use statistical classification, thresholding, edge detection, region detection, or any combination of these techniques. Nevertheless, image segmentation of skin lesion images is still limited to superficial evaluations which merely display images of the segmentation results and appeal to the reader's intuition for evaluation. There is a consistent lack of quantitative measures; thus, it is difficult to know which segmentations present useful results and in which situations they do so. If segmentation is done well, all other stages in image analysis are made simpler. If significant features that are crucial for diagnosis are not extracted from the images, the accuracy of the automated diagnosis is affected. This paper explores existing quantitative measures in image segmentation across pattern recognition applications such as handwriting, plate number, and colour recognition. Selecting the most suitable segmentation measure is highly important so that as many relevant features as possible can be identified and extracted.
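
For readers unfamiliar with quantitative segmentation measures, the sketch below (an illustration, not taken from the paper) computes two widely used region-overlap scores, the Jaccard index and the Dice coefficient, between a predicted segmentation mask and a ground-truth mask.

```python
import numpy as np

def jaccard(seg, gt):
    """Intersection over union of two boolean masks."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union if union else 1.0

def dice(seg, gt):
    """Dice coefficient: 2 * intersection / (|A| + |B|)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

# Hypothetical 4x4 lesion masks: predicted segmentation vs. ground truth.
pred  = np.array([[0,1,1,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
print(jaccard(pred, truth), dice(pred, truth))   # ~0.833, ~0.909
```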


IEEE Conference on Open Systems | 2011

Implementing corrective and preventive actions in risk assessment software

Ali Farhang Dareshuri; Elnaz Farhang Darehshori; Amir Hatami Hardoroudi; Haslina Md Sarkan

Many IT companies are faced with an alarming rate of risks in their businesses. This can be a result of a lack of experience in controlling recurrence and preventing new risks based on previous projects' experience. On the other hand, Corrective and Preventive Actions (CAPA) are known concepts for learning from experience to avoid non-conformities. The successful implementation of CAPA in different systems convinced the authors to use CAPA in the risk assessment process to decrease the likelihood and impact of risks by learning from the past. To achieve this, our application should survey and rank the possibility and effect of risks, as well as their related elements (such as assets, threats, safeguards, vulnerabilities and team accountability). After each appearance of a risk, the system should review and correct the ranks. It must perform a Root Cause Analysis (RCA). Another requirement for the system is the ability to find similar items. The system can distinguish the similarities by categorizing risk-related elements according to their properties and finding possible comparable items. In addition, the system also uses its data to produce useful high-level recommendations and other required documents. By applying this model, and after identifying all weaknesses, the value of risks abates dramatically. This Corrective and Preventive Actions in Risk Assessment (CAPRA) system has been successfully designed and implemented as a web-based application.
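
As a loose illustration of the "survey and rank the possibility and effect of risks" requirement (not the CAPRA implementation), a risk's exposure can be scored as likelihood times impact and re-ranked after each occurrence; the risks, scores and update rule below are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Risk:
    name: str
    likelihood: float      # 0..1 estimated probability
    impact: float          # 1..10 estimated effect
    occurrences: int = 0

    @property
    def exposure(self) -> float:
        """Simple exposure score used for ranking."""
        return self.likelihood * self.impact

    def record_occurrence(self, bump: float = 0.1) -> None:
        """After a risk materialises, revise its likelihood upward (hypothetical rule)."""
        self.occurrences += 1
        self.likelihood = min(1.0, self.likelihood + bump)

risks: List[Risk] = [
    Risk("data breach", 0.2, 9),
    Risk("server outage", 0.5, 6),
    Risk("scope creep", 0.7, 3),
]
risks[0].record_occurrence()                      # a breach happened; re-rank
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.name}: exposure {r.exposure:.2f}")
```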


Student Conference on Research and Development | 2006

PC-Based Vision System with Pan-Tilt Platform for Face Tracking

Chua Teck Wee; Mohamad Shukri Zainal Abidin; Haslina Md Sarkan

Face tracking has emerged as an area of research with exciting possibilities. This paper describes a fully tested prototype system for face tracking which uses a webcam as a sensor, incorporated with a closed-loop visual feedback control system for the pan-tilt mechanism. The continuously adaptive mean shift (CamShift) algorithm has been adopted to perform the face tracking task. The algorithm, which is simple and computationally efficient, operates on a 2D colour probability distribution image produced from histogram back-projection, and it is designed for dynamically changing distributions. The performance of the prototype has been very satisfactory, and the system can be used in real-time applications.
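
The pipeline described, a hue histogram of the face region, histogram back-projection into a colour probability image, and an adaptively resized search window, maps directly onto OpenCV primitives. A minimal sketch follows, assuming an initial face window has already been located; the camera index, window coordinates and thresholds are placeholder values, and driving the pan-tilt servos from the tracked window centre is left out.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # webcam used as the sensor
ok, frame = cap.read()
x, y, w, h = 200, 150, 100, 100              # hypothetical initial face window
track_window = (x, y, w, h)

# Build a hue histogram of the face region (skin-colour model).
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # 2D colour probability image via histogram back-projection.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation to the distribution.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_rect))], True, (0, 255, 0), 2)
    cv2.imshow("CamShift face tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```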


International Conference of Reliable Information and Communication Technology | 2018

Analyzing Traffic Accident and Casualty Trend Using Data Visualization

Abdullah Sakib; Saiful Adli Ismail; Haslina Md Sarkan; Azri Azmi; Othman Mohd Yusop

Motor vehicles are the backbone of the modern transportation system worldwide. However, the excessive number of motor vehicles tends to cause traffic accidents, leading to numerous casualties. Analyzing existing works in this area, this study has identified the prime reasons behind traffic accidents and casualties. They include driving over the speed limit, the age of drivers and pedestrians, environmental conditions, location, and road type. It has also reviewed and identified several data visualization methods and techniques that have been proposed by many researchers. The objective of this research endeavour is to identify the factors behind traffic accidents, determine the techniques that are used to visualize data, develop a dashboard using data visualization tools to visualize traffic accident trends, and evaluate the functionality of the dashboard, which is built on the United Kingdom's (UK) traffic accident dataset from 2014 to 2016. Upon performing data cleaning, pre-processing and filtering, the raw data has been converted into cleaned, filtered and processed data to create a coherent and properly linked data model. Then, using the Power BI visualization tool, various interactive visualizations have been produced that illustrate several significant trends in accidents and casualties. The visualizations revealed that between 2014 and 2016, the majority of accidents in the UK occurred in urban areas, on single carriageways, on dry road surfaces, in daylight with fine weather, and when the speed limit was below 30 mph. This research may assist the UK's traffic management authorities in identifying the underlying factors behind traffic accidents and in detecting accident and casualty trends in order to take the necessary steps to reduce casualties in traffic accidents.
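
The cleaning, filtering and trend-extraction steps described before dashboarding can be sketched in pandas. The column names below are hypothetical (the actual UK road-safety dataset may label its fields differently); the example drops incomplete rows, restricts the data to 2014-2016 and counts accidents by area type and speed limit.

```python
import pandas as pd

# Hypothetical file and column names standing in for the UK accident CSV.
df = pd.read_csv("uk_accidents.csv", parse_dates=["Date"])

# Data cleaning: drop rows missing key fields and keep the 2014-2016 window.
df = df.dropna(subset=["Date", "Urban_or_Rural_Area", "Speed_limit", "Casualties"])
df = df[(df["Date"].dt.year >= 2014) & (df["Date"].dt.year <= 2016)]

# Trend 1: accident counts per year and area type (urban vs. rural).
by_area = (df.groupby([df["Date"].dt.year, "Urban_or_Rural_Area"])
             .size()
             .rename("Accidents"))

# Trend 2: total casualties by speed-limit band.
by_speed = df.groupby("Speed_limit")["Casualties"].sum().sort_index()

print(by_area)
print(by_speed)
```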


International Conference on Research and Innovation in Information Systems | 2017

The challenges of Extract, Transform and Loading (ETL) system implementation for near real-time environment

Adilah Sabtu; Nurulhuda Firdaus Mohd Azmi; Nilam Nur Amir Sjarif; Saiful Adli Ismail; Othman Mohd Yusop; Haslina Md Sarkan; Suriayati Chuprat

For organizations with considerable investment in data warehousing, the influx of various data types and forms requires certain ways of preparing data and a staging platform that supports fast, efficient and volatile data so that it reaches its targeted audiences or users with different business needs. The Extract, Transform and Load (ETL) system has proved to be the standard of choice for managing and sustaining the movement and transactional processing of valued big data assets. However, a traditional ETL system can no longer accommodate and effectively handle streaming or near real-time data and an environment which demands high availability, low latency and horizontal scalability. This paper identifies the challenges of implementing an ETL system for streaming or near real-time data, which needs to evolve and streamline itself to meet these different requirements. Current efforts and solution approaches to address the challenges are presented. The classification of ETL system challenges is prepared based on near real-time environment features and ETL stages to encourage different perspectives for future research.
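
To make the contrast with batch ETL concrete, here is a minimal micro-batch sketch (an illustration, not a solution from the paper) of a near real-time ETL loop: records are pulled from a stream in small batches, transformed, and loaded immediately rather than in a nightly window. The record structure and the in-memory "warehouse" are invented.

```python
import time
from typing import Dict, Iterable, Iterator, List

def extract(stream: Iterable[Dict], batch_size: int = 100) -> Iterator[List[Dict]]:
    """Group an unbounded record stream into small micro-batches."""
    batch: List[Dict] = []
    for record in stream:
        batch.append(record)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def transform(batch: List[Dict]) -> List[Dict]:
    """Cleanse and enrich each record (hypothetical rules)."""
    out = []
    for r in batch:
        if r.get("amount") is None:          # drop incomplete records
            continue
        r["amount"] = float(r["amount"])     # type normalisation
        r["loaded_at"] = time.time()         # load-time metadata
        out.append(r)
    return out

def load(batch: List[Dict], sink: List[Dict]) -> None:
    """Append to the target store (a list stands in for the warehouse)."""
    sink.extend(batch)

# Hypothetical source: a finite generator standing in for a message queue.
source = ({"id": i, "amount": i * 1.5} for i in range(250))
warehouse: List[Dict] = []
for micro_batch in extract(source, batch_size=100):
    load(transform(micro_batch), warehouse)
print(len(warehouse))   # 250 records loaded across three micro-batches
```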


International Conference on Computer and Information Sciences | 2016

ABCD rules segmentation on malignant tumor and benign skin lesion images

Nurulhuda Firdaus Mohd Azmi; Haslina Md Sarkan; Yazriwati Yahya; Suriayati Chuprat

A skin lesion is defined as a superficial growth or patch of the skin that is visually different from its surrounding area. Skin lesions appear for many reasons, such as symptoms indicative of disease, birthmarks, allergic reactions, and so on. Images of skin lesions are analyzed by computer to capture certain features thought to be characteristic of skin diseases. These activities can be defined as automated skin lesion diagnosis (ASLD). ASLD involves five steps: image acquisition, pre-processing to remove occluding artifacts (such as hair), segmentation to extract regions of interest, feature selection, and classification. This paper presents an analysis of an automated segmentation approach, the ABCD rules (Asymmetry, Border irregularity, Color variegation, Diameter), in image segmentation. The experiment was carried out on malignant tumor and benign skin lesion images. The study shows that the ABCD rules successfully classify the images with a high total dermatoscopy score (TDS). Although some of the analysis shows false alarms, it may provide significant input in the search for a suitable segmentation measure.
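
The total dermatoscopy score mentioned above is conventionally computed, in the ABCD rule of dermatoscopy, as the weighted sum TDS = 1.3*A + 0.1*B + 0.5*C + 0.5*D, with values above about 5.45 suggesting melanoma. The sketch below implements that standard formula as an illustration; the cut-offs and example scores come from the general dermatoscopy literature, not from this paper's experiments (and note that this paper reads D as diameter).

```python
def total_dermatoscopy_score(asymmetry, border, colors, structures):
    """Classic ABCD-rule TDS: weighted sum of the four dermoscopic criteria.

    asymmetry:  0-2 axes of asymmetry
    border:     0-8 abruptly cut-off border segments
    colors:     1-6 distinct colours present
    structures: 1-5 differential structures (this paper uses diameter for D)
    """
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

def classify(tds):
    """Conventional cut-offs from the dermatoscopy literature."""
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "malignant (melanoma suspected)"

# Hypothetical lesion: asymmetric in both axes, 4 border segments, 3 colours,
# 4 differential structures -> TDS = 2.6 + 0.4 + 1.5 + 2.0 = 6.5 (malignant).
tds = total_dermatoscopy_score(2, 4, 3, 4)
print(tds, classify(tds))
```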

Collaboration


Dive into Haslina Md Sarkan's collaborations.

Top Co-Authors

Othman Mohd Yusop, Universiti Teknologi Malaysia
Saiful Adli Ismail, Universiti Teknologi Malaysia
Suriayati Chuprat, Universiti Teknologi Malaysia
Azizah Abdul Manaf, Universiti Teknologi Malaysia
Azri Azmi, Universiti Teknologi Malaysia
Adilah Sabtu, Universiti Teknologi Malaysia
Ali Farhang Dareshuri, Universiti Teknologi Malaysia
Homa Movahednejad, Universiti Teknologi Malaysia