Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Iman Avazpour is active.

Publication


Featured research published by Iman Avazpour.


Recommendation systems in software engineering | 2014

Dimensions and Metrics for Evaluating Recommendation Systems

Iman Avazpour; Teerat Pitakrat; Lars Grunske; John C. Grundy

Recommendation systems help users and developers of various computer and software systems overcome information overload, perform information discovery tasks, and approximate computations, among other uses. They have recently become popular and have been applied in a wide variety of scenarios, ranging from business process modeling to source code manipulation. Because of this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen dimensions, e.g., correctness, novelty, and coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions.
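
As a rough illustration of the kind of metrics surveyed (not taken from the chapter itself), the sketch below computes precision@k and recall@k from the correctness dimension and a simple catalogue-coverage measure; the function names and toy data are assumptions for exposition.

```python
# Illustrative sketch (not from the chapter): two common correctness metrics,
# precision@k and recall@k, plus a simple catalogue-coverage measure, for
# evaluating a recommendation list against a set of known relevant items.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k if k else 0.0

def recall_at_k(recommended, relevant, k):
    """Fraction of the relevant items that appear in the top-k recommendations."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def catalogue_coverage(all_recommendations, catalogue):
    """Share of the catalogue that is ever recommended (a coverage dimension)."""
    recommended_items = {item for rec_list in all_recommendations for item in rec_list}
    return len(recommended_items & set(catalogue)) / len(catalogue)

# Example usage with toy data
recommended = ["a", "b", "c", "d"]
relevant = {"b", "d", "e"}
print(precision_at_k(recommended, relevant, 3))  # 0.33...
print(recall_at_k(recommended, relevant, 3))     # 0.33...
print(catalogue_coverage([recommended], ["a", "b", "c", "d", "e", "f"]))  # 0.66...
```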


Radiology and Oncology | 2009

Segmenting CT images of bronchogenic carcinoma with bone metastases using PET intensity markers approach

Iman Avazpour; Ros Ernida Roslan; Peyman Bayat; M. Iqbal Saripan; Abdul Jalil Nordin; Raja Syamsul Azmir Raja Abdullah

Background. The evolution of medical imaging plays a vital role in the management of patients with cancer. In oncology, PET/CT imaging has contributed widely to patient treatment, from screening to staging, through its large advantages over purely anatomical imaging. PET images provide the functional activity inside the body while CT images show the anatomical information. Hence, the existence of cancer cells can be recognized in the PET image, but since the structural location and position cannot be defined on PET images, we need to retrieve that information from CT images. Methods. In this study, we localize bronchogenic carcinoma by using high-activity points on the PET image as references to extract regions of interest on the CT image. Once the PET and CT images have been registered using cross correlation, coordinates of the candidate points from PET are fed into a seeded region growing algorithm to define the boundary of the lesion on CT. The region growing process continues until a significant change in bilinear pixel values is reached. Results. The method has been tested on eleven images of a patient having bronchogenic carcinoma with bone metastases. The results show that the mean standard error is 33% for over-segmented pixels and 3.4% for under-segmented pixels. Conclusions. Although very simple to implement, region growing can produce regions of interest with good precision. The region growing method depends highly on where the growing process starts. Here, by using data acquired from the other modality, we guide the segmentation process to achieve better segmentation results.
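
A minimal sketch of the general idea, not the authors' implementation: the brightest PET voxel seeds a region growing pass on the registered CT slice. The tolerance-based stopping test stands in for the paper's bilinear-value criterion, and all array names and thresholds are assumptions.

```python
# Minimal sketch of PET-guided seeded region growing (not the authors'
# implementation). Assumes `pet` and `ct` are already-registered 2D NumPy
# arrays of equal shape; the brightest PET voxel seeds the growth on CT,
# and growth stops where pixels differ too much from the seed intensity.
from collections import deque
import numpy as np

def pet_guided_region_grow(pet, ct, tol=30.0):
    seed = np.unravel_index(np.argmax(pet), pet.shape)  # PET hot spot as seed
    seed_val = float(ct[seed])
    region = np.zeros(ct.shape, dtype=bool)
    queue = deque([seed])
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected growth
            ny, nx = y + dy, x + dx
            if 0 <= ny < ct.shape[0] and 0 <= nx < ct.shape[1] and not region[ny, nx]:
                if abs(float(ct[ny, nx]) - seed_val) <= tol:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region  # boolean mask of the segmented lesion on CT
```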


Proceedings of the Second International Workshop on Software Engineering for Embedded Systems | 2012

Robust ArcheOpterix: architecture optimization of embedded systems under uncertainty

Indika Meedeniya; Aldeida Aleti; Iman Avazpour; Ayman Amin

The design of embedded systems involves a number of architecture decisions which have a significant impact on quality. Due to the complexity of today's systems and the large number of design options that need to be considered, making these decisions is beyond the capabilities of human comprehension, which makes architectural design a challenging task. Several tools and frameworks have been developed that automate the search for optimal or near-optimal design decisions based on quantitative architecture evaluations for different quality attributes. However, current approaches use approximations for a series of model parameters which may not be accurate and have to be estimated subject to heterogeneous uncertain factors. We have developed a framework which considers the uncertainty of design-time parameter estimates and optimizes embedded system architectures for robust quality goals. The framework empowers conventional architecture optimization approaches with modeling and tool support for architecture description, model evaluation and architecture optimization in the face of uncertainty.
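
The sketch below illustrates the underlying idea of robustness-aware evaluation in a generic way, not the ArcheOpterix implementation: each candidate architecture is evaluated under Monte Carlo samples of its uncertain parameters and ranked by a robust statistic rather than a point estimate. The reliability model, distributions and penalty term are hypothetical.

```python
# Illustrative sketch of robustness-aware architecture evaluation (not the
# ArcheOpterix implementation). A candidate architecture's quality is
# evaluated under Monte Carlo samples of its uncertain parameters, and
# candidates can be ranked by a robust statistic instead of a point estimate.
import random
import statistics

def robust_score(evaluate, param_distributions, samples=200):
    """evaluate(params) -> quality; param_distributions maps name -> sampler()."""
    qualities = []
    for _ in range(samples):
        params = {name: sample() for name, sample in param_distributions.items()}
        qualities.append(evaluate(params))
    mean = statistics.mean(qualities)
    spread = statistics.pstdev(qualities)
    return mean - spread  # penalise architectures whose quality varies widely

# Hypothetical example: reliability of a two-component architecture whose
# component failure probabilities are only known approximately at design time.
def reliability(params):
    return (1 - params["fail_a"]) * (1 - params["fail_b"])

dists = {
    "fail_a": lambda: random.uniform(0.01, 0.05),
    "fail_b": lambda: random.gauss(0.02, 0.005),
}
print(robust_score(reliability, dists))
```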


Symposium on Visual Languages and Human-Centric Computing | 2012

CONVErT: A framework for complex model visualisation and transformation

Iman Avazpour; John C. Grundy

Model Driven Engineering (MDE) has become a commonly used approach in software engineering. It promotes using models as primary artefacts and proposes methods for transforming them into desired software products. However, specifying models and their transformations with current MDE techniques is not user-friendly, due to the excessive use of high-level abstract models and the textual representation of transformation languages. This paper briefly describes CONVErT, an approach and tool developed for user-centric transformation generation using concrete model visualisations.


Journal of Visual Languages and Computing | 2015

Specifying model transformations by direct manipulation using concrete visual notations and interactive recommendations

Iman Avazpour; John C. Grundy; Lars Grunske

Model transformations are a crucial part of Model-Driven Engineering (MDE) technologies but are usually hard to specify and maintain for many engineers. Most current approaches use meta-model-driven transformation specification via textual scripting languages. These are often hard to specify, understand and maintain. We present a novel approach that instead allows domain experts to discover and specify transformation correspondences using concrete visualizations of example source and target models. From these example model correspondences, complex model transformation implementations are automatically generated. We also introduce a recommender system that helps domain experts and novice users find possible correspondences between large source and target model visualization elements. Correspondences are then specified by directly interacting with suggested recommendations or by drag and drop of visual notational elements of source and target visualizations. We have implemented this approach in our prototype tool-set, CONVErT, and applied it to a variety of model transformation examples. Our evaluation of this approach includes a detailed user study of our tool and a quantitative analysis of the recommender system. Author highlights: we provide direct manipulation of visual notations for model transformation tasks; a recommender system is used to suggest source and target model correspondences; we provide a tool, CONVErT, to realize the approach; a user study shows acceptance of the concrete visual approach for model transformation.
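
As a toy illustration of how correspondence recommendations could be ranked (an assumption for exposition, not the CONVErT recommender), the sketch below scores candidate source/target element pairs by string similarity of their names and returns the highest-scoring pairs.

```python
# Toy sketch: rank candidate source/target correspondences by name similarity
# (an illustrative assumption, not the CONVErT recommender system).
from difflib import SequenceMatcher

def recommend_correspondences(source_elements, target_elements, top_n=3):
    """Return the top-n (source, target, score) pairs by name similarity."""
    candidates = [
        (s, t, SequenceMatcher(None, s.lower(), t.lower()).ratio())
        for s in source_elements
        for t in target_elements
    ]
    return sorted(candidates, key=lambda c: c[2], reverse=True)[:top_n]

# Example: suggest mappings between a class model and a table schema
print(recommend_correspondences(
    ["CustomerName", "OrderDate", "TotalPrice"],
    ["customer_name", "order_date", "price_total"],
))
```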


Biological Procedures Online | 2009

Segmentation of Extrapulmonary Tuberculosis Infection Using Modified Automatic Seeded Region Growing

Iman Avazpour; M. Iqbal Saripan; Abdul Jalil Nordin; Raja Syamsul Azmir Raja Abdullah

In the image segmentation process for positron emission tomography combined with computed tomography (PET/CT) imaging, previous work used information from CT alone to segment the image, without utilizing the information that PET can provide. This paper proposes to utilize the hot spot values in PET to guide the segmentation in CT, in automatic image segmentation using the seeded region growing (SRG) technique. This automatic segmentation routine can be used as part of automatic diagnostic tools. In addition to the original initial seed selection using hot spot values in PET, this paper also introduces a new SRG growing criterion based on sliding windows. Fourteen images of patients having extrapulmonary tuberculosis have been examined using the above-mentioned method. To evaluate the performance of the modified SRG, three fidelity criteria are measured: percentage of under-segmented area, percentage of over-segmented area, and average time consumption. In terms of the under-segmentation percentage, SRG with the region-average growing criterion shows the lowest error percentage (51.85%). Meanwhile, SRG with the local averaging and variance criterion yielded the best result (2.67%) for the over-segmentation percentage. In terms of time complexity, the modified SRG with the local averaging and variance growing criterion shows the best performance, with a 5.273 s average execution time. The results indicate that the proposed methods yield fairly good performance in terms of the over- and under-segmented areas. The results also demonstrate that the hot spot values in PET can be used to guide the automatic segmentation of the CT image.
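
The snippet below sketches a local-averaging-and-variance growing criterion of the kind the paper evaluates; the window size, threshold factor and function shape are assumptions rather than the authors' parameters.

```python
# Minimal sketch of a local mean/variance growing criterion of the kind
# evaluated in the paper (window size and thresholds are assumptions, not
# the authors' parameters). A candidate pixel joins the region if it lies
# within a few standard deviations of the mean of the already-grown pixels
# inside a sliding window centred on it.
import numpy as np

def accept_pixel(image, region_mask, y, x, window=5, k=2.0):
    half = window // 2
    y0, y1 = max(0, y - half), min(image.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half + 1)
    local_region = region_mask[y0:y1, x0:x1]
    if not local_region.any():
        return False  # no already-grown neighbours inside the window yet
    local_vals = image[y0:y1, x0:x1][local_region]
    mean, std = float(local_vals.mean()), float(local_vals.std())
    return abs(float(image[y, x]) - mean) <= k * max(std, 1e-6)
```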


Symposium on Visual Languages and Human-Centric Computing | 2016

A domain-specific visual modeling language for testing environment emulation

Jian Liu; John C. Grundy; Iman Avazpour; Mohamed Abdelrazek

Software integration testing plays an increasingly important role as the software industry has experienced a major change from isolated applications to highly distributed computing environments. Conducting integration testing is a challenging task because it is often very difficult to replicate a real enterprise environment. Emulating the testing environment is one of the key solutions to this problem. However, existing specification-based emulation techniques require manual coding of their message-processing engines and therefore incur high development cost. In this paper, we present a suite of domain-specific visual modeling languages to describe emulated testing environments at a high level of abstraction. Our solution allows domain experts to model a testing environment from abstract interface layers. These layer models are then transformed into a runtime environment for application testing. Our user study shows that our visual languages are easy to use, yet have sufficient expressive power to model complex testing applications.
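
Purely as an illustration of model-to-runtime transformation (hypothetical names, not the paper's modeling languages or tool), the sketch below turns a toy interface-layer model into a simple message-processing stub.

```python
# Illustrative sketch only (hypothetical names, not the paper's languages or
# tool): a toy "interface layer" model expressed as data classes and
# transformed into a simple message-processing stub, in the spirit of
# generating an emulated testing endpoint from an abstract layer model.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Operation:
    name: str
    canned_response: str

@dataclass
class InterfaceLayer:
    service: str
    operations: List[Operation] = field(default_factory=list)

def to_runtime_stub(layer: InterfaceLayer) -> Callable[[str], str]:
    """Transform the layer model into a callable stub that answers messages."""
    table = {op.name: op.canned_response for op in layer.operations}
    def handle(message_name: str) -> str:
        return table.get(message_name, f"unknown operation: {message_name}")
    return handle

# Example usage
stub = to_runtime_stub(InterfaceLayer("OrderService",
                                      [Operation("createOrder", "<ack id='1'/>")]))
print(stub("createOrder"))  # prints the canned response
```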


Symposium on Visual Languages and Human-Centric Computing | 2015

A multi-view framework for generating mobile apps

Scott Barnett; Iman Avazpour; Rajesh Vasa; John C. Grundy

This paper demonstrates the multi-view framework of the Rapid APPlication Tool (RAPPT). RAPPT enables rapid development of mobile applications. It employs a multi-level approach to mobile application development: a Domain Specific Visual Language to define the high-level structure of mobile apps, a Domain Specific Textual Language to define behavioural concepts, and concrete source code for fine-grained improvements.


Automated Software Engineering | 2013

Tool support for automatic model transformation specification using concrete visualisations

Iman Avazpour; John C. Grundy; Lars Grunske

Complex model transformation is crucial in several domains, including Model-Driven Engineering (MDE), information visualisation and data mapping. Most current approaches use meta-model-driven transformation specification via coding in textual scripting languages. This paper demonstrates a novel approach and tool support that instead provides for specification of correspondences between models using concrete visualisations of source and target models, and generates transformation scripts from these by-example model correspondence specifications.


Symposium on Visual Languages and Human-Centric Computing | 2017

Visualising Melbourne pedestrian count

Humphrey O. Obie; Caslon Chua; Iman Avazpour; Mohamed Abdelrazek; John C. Grundy

We present a visualisation of Melbourne pedestrian count data and a visual metaphor for representing the hour-level temporal dimension in this context. The pedestrian count data are captured from sensors located around the city. A visualisation web application is implemented that combines a thematic map of these sensor locations with a 24-hour clock-like polygon showing pedestrian counts at every hour, alongside a display of daily temperature. Our visualisation allows users to analyse how the city is used by pedestrians. Moreover, the design of our visualisation was driven by the types of analysis tasks carried out by city planners. The visualisation can help city planners better understand the dynamics of pedestrian activity within the city and aid them in urban management and design policy recommendations.
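
A minimal sketch of the 24-hour clock-like view described above, drawn with matplotlib polar axes over synthetic hourly counts; the real application uses Melbourne's pedestrian sensor feed and a web front end rather than this plotting code.

```python
# Sketch of a 24-hour clock-like view of hourly pedestrian counts using
# matplotlib polar axes with synthetic data (placeholder for the sensor feed).
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(24)
counts = np.random.randint(50, 2000, size=24)   # placeholder hourly counts
angles = 2 * np.pi * hours / 24                 # one sector per hour

ax = plt.subplot(projection="polar")
ax.bar(angles, counts, width=2 * np.pi / 24, align="edge", edgecolor="black")
ax.set_theta_zero_location("N")                 # midnight at the top
ax.set_theta_direction(-1)                      # hours run clockwise
ax.set_xticks(angles)
ax.set_xticklabels(hours)
ax.set_title("Hourly pedestrian counts at one sensor (synthetic data)")
plt.show()
```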

Collaboration


Dive into Iman Avazpour's collaborations.

Top Co-Authors

Jian Liu, Swinburne University of Technology

Rajesh Vasa, Swinburne University of Technology

Hai Le Vu, Swinburne University of Technology

Scott Barnett, Swinburne University of Technology

Lars Grunske, University of Stuttgart