
Publication


Featured research published by Fahim Arif.


International Conference on Document Analysis and Recognition | 2011

Edge-Based Features for Localization of Artificial Urdu Text in Video Images

Akhtar Jamil; Imran Siddiqi; Fahim Arif; Ahsen Raza

Content-based video indexing and retrieval has become an interesting research area with the tremendous growth in the amount of digital media. In addition to the audio-visual content, text appearing in videos can serve as a powerful tool for semantic indexing and retrieval. This paper proposes an edge-feature-based method for detecting horizontally aligned artificial Urdu text in video images. The system exploits edge-based segmentation to extract textual content from videos. We first find the vertical gradients in the input video image and average the gradient magnitude in a fixed neighborhood of each pixel. The resulting image is binarized, and the horizontal run-length smoothing algorithm (RLSA) is applied to merge possible text regions. An edge density filter is then applied to eliminate noisy non-text regions. Finally, the candidate regions satisfying certain geometrical constraints are accepted as text regions. The proposed approach, evaluated on a data set of 150 video images, exhibited promising results.
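The run-length smoothing step of the pipeline above can be sketched as follows. This is an illustrative pure-Python RLSA on a single binary row; the function name and threshold are assumptions for the sketch, not details taken from the paper.

```python
def rlsa_horizontal(row, threshold):
    """Horizontal run-length smoothing: fill runs of 0s that are
    shorter than `threshold` and bounded by 1s on both sides,
    merging nearby edge pixels into candidate text regions."""
    out = list(row)
    n = len(row)
    i = 0
    while i < n:
        if row[i] == 0:
            j = i
            while j < n and row[j] == 0:
                j += 1
            # Fill the gap only if it lies between two 1s and is short.
            if i > 0 and j < n and (j - i) <= threshold:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out
```

In the full method this smoothing would run on every row of the binarized gradient image, after which connected regions are filtered by edge density and geometry.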


International Conference on Frontiers in Handwriting Recognition | 2012

An Unconstrained Benchmark Urdu Handwritten Sentence Database with Automatic Line Segmentation

Ahsen Raza; Imran Siddiqi; Ali Abidi; Fahim Arif

In this paper we present and announce a novel off-line sentence database of Urdu handwritten documents, along with a few preprocessing and text line segmentation procedures. Despite increased research interest in Urdu handwritten document analysis over recent years, a standard benchmark dataset that could be used in Urdu handwriting recognition tasks has been missing. Based on our own developed and updated corpus, CENIP-UCCP (Center for Image Processing-Urdu Corpus Construction Project), we have developed an Urdu handwritten database. The corpus is a collection of a variety of Urdu texts that were used to generate forms, which were subsequently filled in by native writers in their natural handwriting. Six categories of text were used to generate these forms, with approximately 66 forms per category. To date, the database comprises 400 digitized forms produced by 200 different writers. The database is completely labeled for content information as well as content detection, and supports the evaluation of systems for Urdu handwriting recognition, line segmentation, and writer identification. The database was also tested with the proposed Urdu text line segmentation scheme, yielding promising segmentation results.


Proceedings of the 2010 National Software Engineering Conference | 2010

Analysis of object oriented complexity and testability using object oriented design metrics

Sadaf Khalid; Saima Zehra; Fahim Arif

The wide applicability of object oriented technology in the software development industry has led to high quality software products at the cost of increased complexity. The complexity of software systems directly contributes to increased testing effort. This paper reviews the testability and complexity of software systems at the design level. Object oriented design metrics proposed in earlier research models are modified to analyze in detail the relationship between complexity, testability, and different attributes of object oriented software design by predicting class-level testability. The estimated results indicate that different attributes of object oriented systems may add directly to the complexity of a design, requiring more testing effort. The metrics proposed in this paper are further validated on four different software projects. The quantifiable results obtained justify the predicted relationship between object oriented design attributes, complexity, and testability.


Future Generation Computer Systems | 2017

Smart urban planning using Big Data analytics to contend with the interoperability in Internet of Things

Muhammad Babar; Fahim Arif

The recent growth and expansion of the Internet of Things (IoT) offers great business prospects in the direction of a new era of smart cities. The smart city vision is widely embraced, as it improves citizens' quality of life by connecting several domains, such as smart transportation, smart parking, smart environment, and smart healthcare. Continuous growth of the multifaceted urban set-up poses major challenges for real-time data processing and smart decision-making. Consequently, in this paper, we propose a smart city architecture based on Big Data analytics. The proposed scheme comprises three modules: (1) a data acquisition and aggregation module that collects varied and diverse data related to city services, (2) a data computation and processing module that performs normalization, filtration, processing, and data analysis, and (3) an application and decision module that formulates decisions and initiates events. The proposed architecture is a generic solution for smart urban planning, and a variety of datasets are analyzed to validate it. In addition, we tested reliable datasets on a Hadoop server to verify the threshold limit value (TLV); the investigation demonstrates that the proposed scheme offers valuable insight into community development systems for improving existing smart city architectures. Moreover, the efficiency of the proposed architecture in terms of throughput is also shown.


International Conference on Connected Vehicles and Expo | 2013

Fairness improvement in long chain multihop wireless ad hoc networks

Fazlullah Khan; Syed Asif Kamal; Fahim Arif

In asymmetric multihop wireless ad hoc networks the original medium access control protocol does not perform well, especially when the offered load is high. Many papers have studied this issue and attempted to obtain better fairness in multihop wireless ad hoc networks. These techniques provide some degree of fairness but either deliver low throughput or assume MAC-layer fairness, and hence need improvement. In this paper we propose a new method based on probabilistic control of a round-robin queue, which obtains better results in terms of fairness. Our proposed method performs better than FIFO scheduling, RR scheduling, and PCRQ scheduling. To evaluate the performance of the proposed method, we used Network Simulator version 2 (NS-2). Simulation results show that our proposed method produces a higher degree of fairness.
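The "degree of fairness" in evaluations of this kind is commonly quantified with Jain's fairness index over per-flow throughputs. The index itself is standard; its use here is illustrative, not a detail taken from the paper.

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one flow gets all the throughput)
    up to 1.0 (all flows get equal throughput)."""
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (s * s) / (n * sq)
```

A scheduler that raises this index toward 1.0 without reducing aggregate throughput is improving fairness in the sense the abstract describes.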


Frontiers of Information Technology | 2011

Quantifying Non-functional Requirements in Service Oriented Development

Jawaria Sadiq; Athar Mohsin; Fahim Arif

This research aims at improving quality in service oriented applications by improving the requirement engineering of quality requirements. The idea is to propose a quantification mechanism that covers service development from the consumer perspective and can feed back into quality requirement management in service oriented development. The quantification mechanism is effective in two ways: first, it aligns quality across service development (service identification, service design, service implementation, service usage); second, its link with the SLA enables both producer and consumer to check quality. In this way, quality requirements are better developed, and can be regularly checked and enhanced if required.


International Journal of Advanced Computer Science and Applications | 2015

Systematic Literature Review of Agile Scalability for Large Scale Projects

Hina Saeeda; Hannan Khalid; Mukhtar Ahmed; Abu Sameer; Fahim Arif

Among newer methods, agile has emerged as the leading approach in the software industry for developing software. In its different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size, and distributed environments. Agile has proved successful in small and medium size projects; however, it has several limitations when applied to large projects. The purpose of this study is to examine agile techniques in detail and to find and highlight their restrictions for large projects with the help of a systematic literature review. The review seeks answers to the following research questions: 1) How can agile approaches be made scalable and adoptable for large projects? 2) What existing methods, approaches, frameworks, and practices support the agile process in large scale projects? 3) What are the limitations of existing agile approaches, methods, frameworks, and practices with reference to large scale projects? This study will identify the current research problems of agile scalability for large projects by giving a detailed literature review of the identified problems and of existing work addressing them, and will find the limitations of that work in covering the identified problems. All results gathered will be summarized statistically, and based on these findings remedial work will be planned in future for handling the identified limitations of agile approaches for large scale projects.


Pacific-Rim Symposium on Image and Video Technology | 2006

Projection method for geometric modeling of high resolution satellite images applying different approximations

Fahim Arif; Muhammad Akbar; An-Ming Wu

Precise remote sensing and high resolution satellite images have made it necessary to revise the geometric correction techniques used for ortho-rectification. Algorithms have improved from simple 2D polynomial models to rigorous mathematical models derived from digital photogrammetry. In this scenario, conventional methods of photogrammetric modeling of remotely sensed images would be insufficient for mapping purposes and might need to be substituted with a more rigorous approach to obtain a true orthophoto. To correct geometric distortions in such images, the process of geometric modeling becomes important. A pixel projection method has been devised and used for geometric correction. The algorithm has been developed in C++ and applied to FORMOSAT-2 high resolution satellite images. It geo-references a satellite image by matching vertices of the image with their geo-locations extracted from ancillary data. The accuracy and validity of the algorithm have already been tested on different types of satellite images. It takes a level-1A image as input and produces a level-2 image. To increase the geometric accuracy, a set of ground control points of maximum accuracy can also be selected to obtain better knowledge of position, attitude, and pixel alignment. In this paper, we adopt different approximation techniques and apply three possible interpolation methods for transforming image pixels to the earth coordinate system. Results show that cubic convolution based modeling gives the most suitable output pixel values when applying the transformation.
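Cubic convolution interpolation of the kind the abstract favors is typically built on Keys' cubic kernel. A minimal sketch follows, using the common parameter a = -0.5; the parameter choice and function names are assumptions for illustration, not taken from the paper.

```python
def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel: weights the four nearest
    samples as a function of distance x; a = -0.5 is the usual choice."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interpolate(p0, p1, p2, p3, t):
    """Interpolate at fractional offset t in [0, 1) between samples
    p1 and p2, using the four neighboring samples p0..p3."""
    return (p0 * cubic_kernel(t + 1) + p1 * cubic_kernel(t)
            + p2 * cubic_kernel(t - 1) + p3 * cubic_kernel(t - 2))
```

Applied separably along rows and columns, this gives the 4x4-neighborhood resampling used when projecting image pixels into the earth coordinate system.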


International Conference on Mechatronics | 2005

Resampling air borne sensed data using bilinear interpolation algorithm

Fahim Arif; Muhammad Akbar

Airborne sensed data are acquired from satellite/aerial images and other remote sensing or scanning platforms. Each grid cell, or pixel, has a certain value depending on how the image was captured and what it represents. Reprojection and resampling of raster image data should be handled with caution: issues of geometric distortion and errors due to resampling need to be considered carefully. Interpretation of such data requires correction of distortions and resampling of the recorded data. There are different methods for this purpose, but each has its inherent problems. This paper presents a bilinear interpolation method for resampling airborne sensed data. The method is developed in Matlab, and results are obtained using different interpolation rates. A review of the method is also given at the end.
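A bilinear resampler of the kind the abstract describes can be sketched as follows, in pure Python rather than Matlab; the function names and the integer upsampling factor are illustrative assumptions.

```python
def bilinear(img, x, y):
    """Sample image `img` (list of rows) at fractional coordinates
    (x, y) by linearly blending the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def resample(img, factor):
    """Upsample by an integer factor using bilinear interpolation."""
    h, w = len(img), len(img[0])
    return [[bilinear(img, x / factor, y / factor)
             for x in range(w * factor)]
            for y in range(h * factor)]
```

Different interpolation rates correspond to different `factor` values; bilinear interpolation smooths between samples but, unlike cubic methods, uses only the 2x2 neighborhood.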


International Conference on Computational Science and Its Applications | 2014

Modeling of Embedded System Using SysML and Its Parallel Verification Using DiVinE Tool

Muhammad Abdul Basit Ur Rahim; Fahim Arif; Jamil Ahmad

SysML is a modeling language that can be used for the modeling of embedded systems. It is rich enough to model critical and complex embedded systems, and the available modeling tools have made modeling such large and complex systems much easier. They provide sufficient support for the specification of functional requirements in the elicitation phase as well as in the design phase through graphical modeling. These systems must be properly validated and verified before manufacturing and deployment in order to increase their reliability and reduce their maintenance cost. In this paper, we propose a methodology for the modeling and verification of embedded systems in parallel and distributed environments. We demonstrate the suitability of the framework by applying it to a case study of an embedded security system. The parallel model checking tool DiVinE is used because the available sequential verification tools either fail or show poor performance. DiVinE supports Linear Temporal Logic (LTL) for defining nonfunctional requirements and the DVE language for specifying models. First, the case study is modeled using SysML state machine diagrams; then semantics are described to translate these state machine diagrams to a DVE-based model. The translated model is verified against the specified LTL properties using DiVinE.

Collaboration


Dive into Fahim Arif's collaborations.

Top Co-Authors

Muhammad Babar (National University of Sciences and Technology)
Muhammad Akbar (National University of Sciences and Technology)
Muhammad Abdul Basit-Ur-Rahim (National University of Sciences and Technology)
Abdul Ghafoor (National University of Sciences and Technology)
Jamil Ahmad (National University of Sciences and Technology)
Rabail Tahir (National University of Sciences and Technology)
Muhammad Mohsin Riaz (COMSATS Institute of Information Technology)
Tabinda Sarwar (National University of Sciences and Technology)
An-Ming Wu (National Space Organization)
Adnan Rashdi (National University of Sciences and Technology)