Shekoofeh Azizi
University of British Columbia
Publication
Featured research published by Shekoofeh Azizi.
medical image computing and computer assisted intervention | 2015
Shekoofeh Azizi; Farhad Imani; Bo Zhuang; Amir M. Tahmasebi; Jin Tae Kwak; Sheng Xu; Nishant Uniyal; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Mehdi Moradi; Parvin Mousavi; Purang Abolmaesumi
We propose an automatic feature selection framework for analyzing temporal ultrasound signals of prostate tissue. The framework consists of: 1) an unsupervised feature reduction step that uses a Deep Belief Network (DBN) on spectral components of the temporal ultrasound data; 2) a supervised fine-tuning step that uses the histopathology of the tissue samples to further optimize the DBN; 3) a Support Vector Machine (SVM) classifier that uses the activations of the DBN as input and outputs a likelihood of cancer. In leave-one-core-out cross-validation experiments using 35 biopsy cores, an area under the curve of 0.91 is obtained for cancer prediction. Subsequently, an independent group of 36 biopsy cores was used for validation of the model. The results show that the framework correctly predicts 22 of the 23 benign cores and all of the cancerous cores. We conclude that temporal analysis of ultrasound data can potentially complement multi-parametric Magnetic Resonance Imaging (mp-MRI) by improving the differentiation of benign and cancerous prostate tissue.
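The first stage of the pipeline can be sketched in a few lines. The toy example below uses PCA over FFT magnitudes as a stand-in for the DBN-based unsupervised feature reduction (the actual paper uses a DBN with supervised fine-tuning); the data, array shapes, and function names are illustrative, not from the paper:

```python
import numpy as np

def spectral_features(teus, n_bins=16):
    """Magnitude spectrum of each temporal ultrasound signal (one row per ROI)."""
    spec = np.abs(np.fft.rfft(teus, axis=1))
    return spec[:, :n_bins]

def pca_reduce(X, n_components=4):
    """Unsupervised feature reduction (PCA here, as a stand-in for the DBN)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy data: 35 "biopsy cores", 64 time samples each
rng = np.random.default_rng(0)
signals = rng.normal(size=(35, 64))
latent = pca_reduce(spectral_features(signals))
print(latent.shape)  # (35, 4): low-dimensional features for an SVM
```

In the actual framework the resulting low-dimensional features would then feed the SVM classifier described in the abstract.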
information security and cryptology | 2014
Sepideh Akhavan Bitaghsir; Nader Karimi; Shekoofeh Azizi; Shadrokh Samavi
Digital image watermarking has been proposed to protect the intellectual property of digital images. Although stereo image pairs and their applications are becoming increasingly popular, very few algorithms have been proposed for copyright protection of this type of image, and the existing methods do not consider the binocular visibility of stereo pairs. Because the human perceptual system does not perceive the left and right images of a stereo pair independently, a binocular just noticeable difference (BJND) model has recently been proposed. The BJND model describes the sensitivity of the human visual system to luminance changes in stereo images. In this paper, we propose a new stereo image watermarking algorithm based on the BJND model. To this end, the proposed method embeds the watermark in the DCT coefficients of the contourlet transform. The proposed method adaptively changes the embedding strength factor (α) according to the BJND values of the two corresponding stereo blocks. Experimental results demonstrate that the proposed algorithm achieves a tradeoff between robustness and imperceptibility while preserving the binocular visibility threshold of stereo pairs.
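The adaptive-strength idea can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification (the scaling rule, function names, and per-block BJND numbers are assumptions for illustration, not the paper's actual formula): the embedding strength is scaled by the smaller BJND of the two corresponding stereo blocks, so the change stays below the binocular visibility threshold:

```python
import numpy as np

def adaptive_strength(bjnd_left, bjnd_right, base_alpha=0.1):
    """Scale the embedding strength by the smaller BJND of the two
    corresponding stereo blocks (smaller BJND -> less visible change allowed)."""
    visibility = np.minimum(bjnd_left, bjnd_right)
    return base_alpha * visibility / visibility.max()

def embed(coeffs, bits, alpha):
    """Additively embed +/-1 watermark bits into transform coefficients."""
    return coeffs + alpha * (2.0 * bits - 1.0)

# per-block BJND values for the left and right image (illustrative numbers)
alpha = adaptive_strength(np.array([2.0, 4.0]), np.array([3.0, 1.0]))
marked = embed(np.zeros(2), np.array([1, 0]), alpha)
print(alpha)   # [0.1  0.05]
print(marked)  # [ 0.1  -0.05]
```

In the actual method the coefficients would come from a DCT applied to contourlet subbands rather than the zero vector used here.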
medical image computing and computer assisted intervention | 2016
Shekoofeh Azizi; Farhad Imani; Jin Tae Kwak; Amir M. Tahmasebi; Sheng Xu; Pingkun Yan; Jochen Kruecker; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Parvin Mousavi; Purang Abolmaesumi
We propose a cancer grading approach for transrectal ultrasound-guided prostate biopsy based on analysis of temporal ultrasound signals. Histopathological grading of prostate cancer reports the statistics of cancer distribution in a biopsy core. We propose a coarse-to-fine classification approach, similar to histopathology reporting, that uses statistical analysis and deep learning to determine the distribution of aggressive cancer in ultrasound image regions surrounding a biopsy target. Our approach consists of two steps: in the first step, we learn high-level latent features that maximally differentiate benign from cancerous tissue; in the second step, we model the statistical distribution of prostate cancer grades in the space of latent features. In a study with 197 biopsy cores from 132 subjects, our approach can effectively separate clinically significant disease from low-grade tumors and benign tissue. Further, we achieve an area under the curve of 0.8 for separating aggressive cancer from benign tissue in large tumors.
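The second step, modeling grade statistics in a latent space, can be sketched as follows. This is a hypothetical, simplified stand-in (diagonal Gaussians over a 1-D latent space, with made-up data and names), not the paper's actual model:

```python
import numpy as np

def fit_grade_stats(latent, grades):
    """Step 2: model each grade as a diagonal Gaussian in the latent space."""
    return {g: (latent[grades == g].mean(axis=0),
                latent[grades == g].std(axis=0) + 1e-9)
            for g in np.unique(grades)}

def log_likelihoods(stats, z):
    """Per-grade Gaussian log-likelihood of a new latent vector z."""
    return {g: float(-0.5 * (((z - mu) / sd) ** 2
                             + np.log(2.0 * np.pi * sd ** 2)).sum())
            for g, (mu, sd) in stats.items()}

# toy latent features for two well-separated grades
latent = np.array([[0.0], [0.1], [5.0], [5.1]])
grades = np.array([0, 0, 1, 1])
ll = log_likelihoods(fit_grade_stats(latent, grades), np.array([0.05]))
best = max(ll, key=ll.get)
print(best)  # 0: the query point lies in grade 0's cluster
```

The coarse-to-fine decision then follows from comparing these per-grade likelihoods for each image region.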
Multimedia Tools and Applications | 2018
Majid Mohrekesh; Shekoofeh Azizi; Shahram Shirani; Nader Karimi; Shadrokh Samavi
Increasing production and exchange of multimedia content have increased the need for better copyright protection through watermarking. Different methods have been proposed to satisfy the tradeoff between imperceptibility and robustness, two important characteristics of watermarking, while maintaining proper data-embedding capacity. Many watermarking methods, however, use a set of parameters that is independent of the image, even though different images possess different potentials for the robust and transparent hosting of watermark data. To overcome this deficiency, in this paper we propose a new hierarchical adaptive watermarking framework. At the higher level of the hierarchy, the complexity of an image is ranked against the complexities of images in a dataset; for a typical dataset of images, the statistical distribution of block complexities is found. At the lower level of the hierarchy, the complexities of blocks are found for the single cover image that is to be watermarked. The local complexity variation between a block and its neighbors is used to change the watermark strength factor of each block adaptively. Such local complexity analysis creates an adaptive embedding scheme, which results in higher transparency by reducing blockiness effects. This two-level hierarchy enables our method to take advantage of all image blocks to elevate the embedding capacity while preserving imperceptibility. To test the effectiveness of the proposed framework, the contourlet transform in conjunction with the discrete cosine transform is used to embed pseudorandom binary sequences as a watermark. Experimental results show that the proposed framework elevates the performance of the watermarking routine in terms of both robustness and transparency.
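The lower level of the hierarchy, per-block complexity and a locally adapted strength factor, can be sketched as below. This is a minimal illustration under assumed choices (block variance as the complexity measure, a 3x3 neighbourhood mean, and a simple ratio rule); the paper's actual complexity measure and adaptation rule may differ:

```python
import numpy as np

def block_complexity(img, bs=8):
    """Per-block complexity; variance is used here as a simple proxy."""
    h, w = img.shape
    blocks = img[:h // bs * bs, :w // bs * bs].reshape(h // bs, bs, w // bs, bs)
    return blocks.var(axis=(1, 3))

def local_strength(cmpx, base=0.04):
    """Raise the strength factor where a block is more complex than the
    mean complexity of its 3x3 neighbourhood (edge-padded)."""
    padded = np.pad(cmpx, 1, mode='edge')
    h, w = cmpx.shape
    neigh = sum(padded[di:di + h, dj:dj + w]
                for di in range(3) for dj in range(3)) / 9.0
    return base * cmpx / (neigh + 1e-9)

img = np.arange(256, dtype=float).reshape(16, 16)  # toy 16x16 cover image
strength = local_strength(block_complexity(img))
print(strength.shape)  # (2, 2): one strength factor per 8x8 block
```

Complex blocks relative to their surroundings get a larger strength factor, which is what reduces visible blockiness at smooth-to-textured boundaries.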
computer assisted radiology and surgery | 2018
Shekoofeh Azizi; Nathan Van Woudenberg; Samira Sojoudi; Ming Li; Sheng Xu; Emran Mohammad Abu Anas; Pingkun Yan; Amir M. Tahmasebi; Jin Tae Kwak; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Parvin Mousavi; Purang Abolmaesumi
Purpose: We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for the detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable the dissemination of this technology to the community for large-scale clinical validation. Methods: In this paper, we present a unified software framework demonstrating near-real-time analysis of an ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization and a deep learning back-end to build an accessible, flexible and robust platform. A client–server approach is used in order to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. Results: The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects. It is further assessed with an independent dataset with 21 biopsy targets from six subjects. In the first study, we achieve an area under the curve, sensitivity, specificity and accuracy of 0.94, 0.77, 0.94 and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. Conclusion: Our results suggest that TeUS-guided biopsy can be potentially effective for the detection of prostate cancer.
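The client–server decoupling of acquisition from expensive inference can be illustrated with a minimal in-process sketch. Here a thread and two queues stand in for the networked client–server split, and a trivial function stands in for the deep network; all names and data are illustrative, not from the described system:

```python
import queue
import threading

def inference_server(frames_in, results_out, model):
    """Server side: consume ultrasound frames from the stream, run the
    (computationally expensive) model, publish results back."""
    while True:
        frame = frames_in.get()
        if frame is None:                    # sentinel: stream closed
            break
        results_out.put(model(frame))

frames, results = queue.Queue(), queue.Queue()
toy_model = lambda f: sum(f) / len(f)        # stand-in for the deep network
server = threading.Thread(target=inference_server,
                          args=(frames, results, toy_model))
server.start()
for frame in ([1, 2, 3], [4, 5, 6]):         # "client" streams frames
    frames.put(frame)
frames.put(None)
server.join()
out = [results.get(), results.get()]
print(out)  # [2.0, 5.0]
```

The same pattern scales to a real deployment by replacing the queues with a network transport, which keeps the acquisition side responsive while the GPU back-end processes frames in parallel.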
Proceedings of SPIE | 2017
Sharareh Bayat; Farhad Imani; Carlos D. Gerardo; Guy Nir; Shekoofeh Azizi; Pingkun Yan; Amir M. Tahmasebi; Storey Wilson; Kenneth A. Iczkowski; M. Scott Lucia; Larry Goldenberg; Septimiu E. Salcudean; Parvin Mousavi; Purang Abolmaesumi
Temporal enhanced ultrasound (TeUS) is an imaging approach in which a sequence of temporal ultrasound data is acquired and analyzed for tissue typing. Previously, in a series of in vivo and ex vivo studies, we have demonstrated that this approach is effective for detecting prostate and breast cancers. Evidence derived from our experiments suggests that both ultrasound-signal-related factors, such as induced heat, and tissue-related factors, such as the distribution and micro-vibration of scatterers, contribute tissue typing information to TeUS. In this work, we simulate mechanical micro-vibrations of scatterers in tissue-mimicking phantoms that have various scatterer densities reflecting benign and cancerous tissue structures. Finite element modeling (FEM) is used for this purpose, where the vertices are scatterers representing cell nuclei. The initial positions of the scatterers are determined by the distribution of nuclei segmented from actual digital histology scans of prostate cancer patients. Subsequently, we generate ultrasound images of the simulated tissue structure using the Field II package, resulting in temporal enhanced ultrasound data. We demonstrate that the micro-vibrations of scatterers are captured by temporal ultrasound data and that this information can be exploited for tissue typing.
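The simulation setup can be caricatured in a few lines. The sketch below uses a single sinusoidal displacement as a toy stand-in for the FEM displacement field, and uniform random positions as a stand-in for the histology-derived nuclei distribution; the amplitudes, densities, and names are all assumptions for illustration (generating the actual RF frames would then require the Field II package):

```python
import numpy as np

def micro_vibrate(positions, t, amp=1e-6, freq=50.0):
    """Displace scatterer positions by a small sinusoidal micro-vibration
    (a toy stand-in for the FEM displacement field)."""
    return positions + amp * np.sin(2.0 * np.pi * freq * t)

rng = np.random.default_rng(1)
benign = rng.uniform(0.0, 1e-3, size=200)  # sparse scatterers (positions in m)
cancer = rng.uniform(0.0, 1e-3, size=800)  # denser scatterers
frames = {name: [micro_vibrate(p, t) for t in np.linspace(0.0, 0.1, 8)]
          for name, p in (('benign', benign), ('cancer', cancer))}
print(len(frames['cancer']))  # 8 time steps of scatterer positions
```

Each time step of displaced scatterer positions would be imaged to produce one frame of the temporal ultrasound sequence.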
computer assisted radiology and surgery | 2016
Shekoofeh Azizi; Farhad Imani; Sahar Ghavidel; Amir M. Tahmasebi; Jin Tae Kwak; Sheng Xu; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Parvin Mousavi; Purang Abolmaesumi
iranian conference on electrical engineering | 2013
Shekoofeh Azizi; Majid Mohrekesh; Shadrokh Samavi
computer assisted radiology and surgery | 2017
Shekoofeh Azizi; Sharareh Bayat; Pingkun Yan; Amir M. Tahmasebi; Guy Nir; Jin Tae Kwak; Sheng Xu; Storey Wilson; Kenneth A. Iczkowski; M. Scott Lucia; Larry Goldenberg; Septimiu E. Salcudean; Peter A. Pinto; Bradford J. Wood; Purang Abolmaesumi; Parvin Mousavi
computer assisted radiology and surgery | 2017
Shekoofeh Azizi; Parvin Mousavi; Pingkun Yan; Amir M. Tahmasebi; Jin Tae Kwak; Sheng Xu; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Purang Abolmaesumi