Qutrit-inspired Fully Self-supervised Shallow Quantum Learning Network for Brain Tumor Segmentation
Debanjan Konar, MIEEE, Siddhartha Bhattacharyya, SMIEEE, Bijaya K. Panigrahi, SMIEEE, and Elizabeth Behrman
Abstract—Classical self-supervised networks suffer from convergence problems and reduced segmentation accuracy due to forceful termination. Quantum neural network models are often described by qubits, or bi-level quantum bits. In this article, a novel self-supervised shallow learning network model exploiting the sophisticated three-level qutrit-inspired quantum information system, referred to as the Quantum Fully Self-Supervised Neural Network (QFS-Net), is presented for automated segmentation of brain MR images. The QFS-Net model comprises a trinity of layered structures of qutrits inter-connected through parametric Hadamard gates using an 8-connected second-order neighborhood-based topology. The non-linear transformation of the qutrit states allows the underlying quantum neural network model to encode the quantum states, thereby enabling a faster self-organized counter-propagation of these states between the layers without supervision. The suggested QFS-Net model is tailored to and extensively validated on the Cancer Imaging Archive (TCIA) data set collected from the Nature repository, and is also compared with state-of-the-art supervised (U-Net and URes-Net) architectures and the self-supervised QIS-Net model. The results show promising segmentation outcomes in detecting tumors in terms of dice similarity and accuracy, with minimal human intervention and computational resources.
Index Terms—Quantum Computing, Qutrit, QIS-Net, MR image segmentation, U-Net and URes-Net.
I. INTRODUCTION

Quantum computing supremacy may be achieved through the superposition of quantum states (quantum parallelism) and quantum entanglement [1]. However, owing to the lack of computing resources for the implementation of quantum algorithms, it is an uphill task to exploit quantum entanglement for optimized computation. Even with recent advances in quantum algorithms, classical systems embedded in quantum formalism and inspired by qubits cannot exploit the full advantages of quantum superposition and entanglement [2]–[4]. Owing to the intrinsic characteristics of quantum mechanics, the implementation of Quantum-inspired Artificial Neural Networks (QANN) has proven successful in specific computing tasks like image classification and pattern recognition [5]–[8]. Nevertheless, quantum neural network models [9], [10] implemented on actual quantum processors are realized using a large number of quantum bits (qubits) in matrix representation, together with linear operations on these vector matrices.

D. Konar and B. K. Panigrahi are with the Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India. Email: [email protected] and [email protected].
S. Bhattacharyya is with the Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India. Email: [email protected].
E. Behrman is with the Department of Mathematics and Physics, Wichita State University, Wichita, Kansas, USA. Email: [email protected].
However, owing to the complex and time-intensive quantum back-propagation algorithms involved in supervised quantum-inspired neural network (QINN) architectures [11], [12], the computational complexity increases manyfold with an increase in the number of neurons and inter-layer interconnections.

Automatic segmentation of brain lesions from Magnetic Resonance Imaging (MRI) greatly facilitates brain tumor identification, overcoming the laborious manual tasks of human experts or radiologists [13]. Manual brain tumor diagnosis, in contrast, suffers from significant variations in shape, size, orientation, intensity inhomogeneity, overlapping gray-scales and inter-observer variability. Recent years have witnessed substantial attention among researchers of the computer vision community to developing robust and efficient automated MR image segmentation procedures.

The current work focuses on a novel quantum fully self-supervised learning network (QFS-Net) characterized by qutrits for fast and accurate segmentation of brain lesions. The primary aim of the suggested work is to enable the QFS-Net to converge faster and to make it suitable for fully automated brain lesion segmentation, obviating any kind of training or supervision. The proposed QFS-Net model relies on qutrits, or three-level quantum states, to exploit the features of quantum correlation. To eliminate the complex quantum back-propagation algorithms used in supervised QINN models, the QFS-Net resorts to a novel fully self-supervised qutrit-based counter-propagation algorithm, which allows the propagation of quantum states between the network layers iteratively.
The primary contributions of our manuscript are fourfold, highlighted as follows:
1) Of late, quantum neural network models and their implementations rely largely on qubits; hence, we have proposed a novel qudit-embedded generic quantum neural network model applicable to any level of quantum states, such as qubits, qutrits, etc.
2) An adaptive multi-class Quantum Sigmoid (QSig) activation function embedded with quantum trits (qutrits) is incorporated to address the wide variation of gray scales in MR images.
3) A convergence analysis of the QFS-Net model is provided, and its super-linearity is also demonstrated experimentally. The proposed qutrit-based quantum neural network model explores the superposition and entanglement properties of quantum computing in classical simulations, resulting in faster convergence of the network architecture and yielding optimal segmentation.
4) The suggested QFS-Net model is validated extensively on the Cancer Imaging Archive (TCIA) data set collected from the Nature repository [14]. Experimental results show the efficacy of the proposed QFS-Net model in terms of dice similarity, thus promoting self-supervised procedures for medical image segmentation.

The remaining sections of the article are organized as follows. Section II reviews various supervised artificial neural networks and deep neural network models useful for brain MR image segmentation. A brief introduction to qutrits and generalized D-level quantum states (qudits) is provided along with the preliminaries of quantum computing in Section III. A novel quantum neural network model characterized by qudits is illustrated in Section IV. A vivid description of the suggested QFS-Net and its operations characterized by qutrits is provided in Section V. Results and discussions shed light on the experimental outcomes of the proposed neural network model in Section VI. Concluding remarks and future directions of research are discussed in Section VII.

II. LITERATURE REVIEW
Recent years have witnessed various machine learning classifiers [15], [16] and deep learning technologies [17]–[20] for automated brain lesion segmentation for tumor detection. Examples include U-Net [18] and UResNet [20], which have achieved remarkable dice scores in the auto-segmentation of medical images. Of late, Pereira et al. [21] suggested a modified Convolutional Neural Network (CNN) introducing small-size kernels to obviate over-fitting. However, CNN-based architectures suffer from the lack of manually segmented or annotated MR images, intensive pre-processing, and the need for expert image analysts. In these circumstances, self-supervised or semi-supervised medical image segmentation is becoming popular in the computer vision research community. Wang et al. [22] contributed an interactive method using deep learning with image-specific tuning for medical image segmentation. Zhuang et al. [23] suggested a Rubik's-cube-recovery-based self-supervised procedure for medical image segmentation. However, such interactive learning frameworks are not fully self-supervised and suffer from complex orientation and time-intensive operations.

Quantum Artificial Neural Networks (QANN) were first proposed in the 1990s [24]–[26] as a means of obviating some of the most recalcitrant problems that stand in the way of the implementation of large-scale quantum computing: algorithm design [27], noise and decoherence [28], [29], and scale-up [30]. Amalgamating artificial neural networks with the intrinsic properties of quantum computing enables QANN models to evolve as promising alternatives to quantum algorithmic computing [31], [32]. Recent advances in both hardware and theoretical development may enable their implementation on the noisy intermediate-scale quantum (NISQ) computers that will soon be available [24], [28], [29], [33]. Konar et al. [7], [8], Schutzhold et al. [34], Trugenberger et al. [35] and Masuyama et al. [36] suggested quantum neural networks for pattern recognition tasks, which deserve special mention for their contributions to QNN research.

The classical self-supervised neural network architectures employed for binary image segmentation suffer from slow convergence [37], [38]. To overcome these challenges, the authors proposed a quantum version of the classical self-supervised neural network architecture relying on qubits for faster convergence and accurate image segmentation, implemented on classical systems [6]–[8]. Furthermore, recently modified versions of these qubit-based network architectures, characterized by multi-level activation functions [39]–[41], have also been validated on MR images for brain lesion segmentation and have reported promising outcomes when compared with current deep learning architectures. However, the implementation of these quantum neural network models on classical systems is centred on the bi-level abstraction of qubits. In most physical implementations, the quantum states are not inherently binary [42]; thus, the qubit model is only an approximation that suppresses higher-level states. The qubit model can lead to slow and untimely convergence and distorted outcomes. Here, three-level quantum states, or qutrits (generally D-level qudits), are introduced to improve the convergence of self-supervised quantum network models.

A. Motivation
The motivations behind the proposed QFS-Net over deep learning based brain tumor segmentation [17], [18], [21], [22] are as follows:
1) Huge volumes of annotated medical images are required for suitable training of a convolutional neural network, and such volumes are a paramount task to acquire.
2) The extensive and time-consuming training of deep neural network-based MR image segmentation requires high computational capabilities (GPU) and memory resources.
3) In automatic brain lesion segmentation, slow convergence and over-fitting problems often affect the outcome; hence, extra effort is required for suitable tuning of the hyper-parameters of the underlying deep neural network architecture.
4) Moreover, the lack of image-specific adaptability of convolutional neural networks leads to a fall in accuracy for unseen medical image classes.

A potential solution that avoids the requirement for training data and the problems faced by intensely supervised convolutional neural networks prevalent in medical image segmentation is a fully self-supervised neural network architecture with minimal human intervention. The novel qutrit-inspired fully self-supervised quantum learning model incorporated in the QFS-Net architecture presented in this article is a formidable contribution in exploiting the information of the brain lesions and poses a new horizon of research and challenges.

III. FUNDAMENTALS OF QUANTUM COMPUTING
Quantum computing offers the inherent features of superposition, coherence, decoherence, and entanglement of quantum mechanics in computational devices and enables the implementation of quantum computing algorithms [26]. Physical hardware in classical systems uses binary logic; however, most quantum systems have multiple ($D$) possible levels. States of these systems are referred to as qudits.

A. Concept of Qudits
In contrast to a two-state quantum system, described by a qubit, a multilevel quantum system ($D > 2$) is represented by $D$ basis states. We choose, as is usual, the so-called "computational" basis: $|0\rangle, |1\rangle, |2\rangle, \ldots, |D-1\rangle$. A general pure state of the system is a superposition of these basis states, represented as

$|\psi\rangle = \alpha_0|0\rangle + \alpha_1|1\rangle + \alpha_2|2\rangle + \cdots + \alpha_{D-1}|D-1\rangle = (\alpha_0, \alpha_1, \ldots, \alpha_{D-1})^T$  (1)

subject to the normalization criterion $|\alpha_0|^2 + |\alpha_1|^2 + \cdots + |\alpha_{D-1}|^2 = 1$, where $\alpha_0, \alpha_1, \ldots, \alpha_{D-1}$ are complex quantities, i.e., $\alpha_i \in \mathbb{C}$. Physically, the absolute magnitude squared of each coefficient $\alpha_i$ represents the probability of the system being measured in the corresponding basis state $|i\rangle$.

In this article, we use a three-level system ($D = 3$), i.e., a basis $\{|0\rangle, |1\rangle, |2\rangle\}$ for each quantum trit or qutrit. One physical example of a qutrit is a spin-1 particle. A general pure (coherent) state of a qutrit [42] is a superposition of all three basis states, which can be represented as

$|\psi\rangle = \alpha_0|0\rangle + \alpha_1|1\rangle + \alpha_2|2\rangle = (\alpha_0, \alpha_1, \alpha_2)^T$  (2)

subject to the normalization criterion $|\alpha_0|^2 + |\alpha_1|^2 + |\alpha_2|^2 = 1$. For example, the state

$|\psi\rangle = \frac{\sqrt{3}}{\sqrt{10}}|0\rangle + \frac{\sqrt{2}}{\sqrt{10}}|1\rangle + \frac{\sqrt{5}}{\sqrt{10}}|2\rangle$  (3)

has a probability of being measured in the basis state $|0\rangle$ of

$|\langle 0|\psi\rangle|^2 = \frac{3}{10}$  (4)

Similarly, the probabilities of the quantum state $|\psi\rangle$ being measured in each of the other two basis states $|1\rangle$ and $|2\rangle$ are $\frac{2}{10}$ and $\frac{5}{10}$, respectively.

B. Quantum Operators
We define generalized Pauli operators on qudits as

$X = \sum_{k=0}^{D-1} |k+1\ (\mathrm{mod}\ D)\rangle\langle k|, \qquad Z = \sum_{k=0}^{D-1} \theta^k |k\rangle\langle k|$  (5)

where $\theta = e^{2\pi j/D}$ is the $D$-th complex root of unity. That is, the operator $X$ shifts a computational basis state $|k\rangle$ to the next state, and the $Z$ operator multiplies a computational basis state by the appropriate phase factor. Note that these two operators generate the generalized Pauli group.

The Hadamard gate is one of the basic constituents of quantum algorithms, as its action creates a superposition of the basis states. On qutrits it is defined as [43]

$H = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & e^{2\pi j/3} & e^{-2\pi j/3} \\ 1 & e^{-2\pi j/3} & e^{2\pi j/3} \end{pmatrix}$  (6)

The special case of the generalized Hadamard gate for qudits is given by

$H|k\rangle = \frac{1}{\sqrt{D}} \sum_{i=0}^{D-1} \theta^{ik} |i\rangle, \quad \text{where } \theta^i = \cos\left(\frac{2i\pi}{D}\right) + j\sin\left(\frac{2i\pi}{D}\right) = e^{2i\pi j/D}$  (7)

Here $j$ is the imaginary unit and $\theta^i$ is the $i$-th power of the $D$-th root of unity.

We define a rotation gate $R(\omega)$ which transforms a qutrit in state $(\alpha_0, \alpha_1, \alpha_2)$ to the (rotated) state $(\alpha_0', \alpha_1', \alpha_2')$ as follows:

$\begin{pmatrix} \alpha_0' \\ \alpha_1' \\ \alpha_2' \end{pmatrix} = \begin{pmatrix} \frac{1+\cos\omega}{2} & -\frac{\sin\omega}{\sqrt{2}} & \frac{1-\cos\omega}{2} \\ \frac{\sin\omega}{\sqrt{2}} & \cos\omega & -\frac{\sin\omega}{\sqrt{2}} \\ \frac{1-\cos\omega}{2} & \frac{\sin\omega}{\sqrt{2}} & \frac{1+\cos\omega}{2} \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \end{pmatrix}$  (8)

Note that the rotation gate defined above is a unitary operator.

IV. QUANTUM NEURAL NETWORK MODEL BASED ON QUDITS (QNNM)

A quantum neural network dealing with discrete data is realized on a classical system using quantum algorithms and acts on quantum states through a layered architecture. In the proposed qudit-embedded quantum neural network model, the classical network inputs are converted into $D$-dimensional quantum states in $[0, \frac{2\pi}{D}]$, or qudits. Let the $k$-th input be given by $x_k$. We apply a standard classical sigmoid activation function $f_{QNNM}(x_k)$, which yields a classical outcome in $[0, 1]$.
$f_{QNNM}(x_k) = \frac{1}{1 + e^{-x_k}}$  (9)

We then map that quantity onto the amplitude for the $k$-th basis state as

$|\alpha_k\rangle = \frac{2\pi}{D} f_{QNNM}(x_k)$  (10)

The suggested QNNM model comprises multiple $D$-dimensional qudits, $Z_D = \{(|z_1\rangle, |z_2\rangle, \ldots, |z_D\rangle)^T : |z_k\rangle \in Z\ (k = 1, 2, \ldots, D)\}$. The inner product between the input quantum states $|\psi_D\rangle = (|\alpha_1\rangle, |\alpha_2\rangle, \ldots, |\alpha_D\rangle)^T$ and the quantum weights $|W_D\rangle = (|\theta_1\rangle, |\theta_2\rangle, \ldots, |\theta_D\rangle)^T$ is defined as

$\langle \psi_D | W_D \rangle = \sum_{k=1}^{D} \langle \alpha_k | \theta_k \rangle = T_D|\alpha_k\rangle\, \mathcal{H}_D|\theta_k\rangle = \sum_{k=1}^{D} \alpha_k |k\rangle \left( \sum_{k=1}^{D} \cos\left(\frac{2k\pi}{D}\right) + j\sin\left(\frac{2k\pi}{D}\right) \right)$  (11)

where $T_D$ and $\mathcal{H}_D$ are the transformation (realization mapping) and the Hadamard gate, respectively. $\bar{\psi}_D$ is the complex conjugate of $\psi_D$ and is defined through

$\langle \psi_D | = |\psi_D\rangle^{\dagger} = (|\bar{\alpha}_1\rangle, |\bar{\alpha}_2\rangle, \ldots, |\bar{\alpha}_D\rangle)$  (12)

In the suggested quantum neural network model, let the set of all quantum states be denoted as $Q_D(Z)$; the $D$-dimensional realization transformation $T_D : Q_D(Z) \to \mathbb{R}^{2D}$ is defined as

$T|\psi_D\rangle = (\mathrm{Re}|\alpha_1\rangle, \mathrm{Im}|\alpha_1\rangle, \mathrm{Re}|\alpha_2\rangle, \mathrm{Im}|\alpha_2\rangle, \ldots, \mathrm{Re}|\alpha_D\rangle, \mathrm{Im}|\alpha_D\rangle)^T$  (13)

for all $|\psi_D\rangle = (|\alpha_1\rangle, |\alpha_2\rangle, \ldots, |\alpha_D\rangle)^T \in Q_D(Z)$ and $\forall i \in D$, $|\alpha_i\rangle = \cos\omega_i|0\rangle + j\sin\omega_i|1\rangle$.
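As an illustration of the encoding in Eqs. (9)-(10) and the realization mapping of Eq. (13), the following sketch (Python/NumPy; the function names are ours, not from the paper's implementation) maps a classical input through the sigmoid onto a phase in $[0, 2\pi/D]$ and realizes it as a cosine/sine amplitude pair:

```python
import numpy as np

D = 3  # qutrit-based model

def f_qnnm(x):
    # Classical sigmoid activation, Eq. (9)
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def encode_phase(x, D=3):
    # Eq. (10): map the sigmoid response onto a phase in (0, 2*pi/D)
    return (2.0 * np.pi / D) * f_qnnm(x)

def realize(phase):
    # One component of the realization mapping T, Eq. (13):
    # a phase omega becomes the real pair (cos(omega), sin(omega))
    return np.array([np.cos(phase), np.sin(phase)])

phase = encode_phase(0.0)   # sigmoid(0) = 0.5, so phase = pi/3 for D = 3
pair = realize(phase)       # (cos(pi/3), sin(pi/3))
```

Note that the realization step is where the quantum state is "destroyed" on a classical system, as discussed at the end of this section.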
The input-output association of a $D$-dimensional basic quantum neuron in the proposed QNNM model at a particular epoch $t$ is modeled as

$|O_k^t\rangle = T_D(|h_k^t\rangle) = T\left(\frac{2\pi}{D}\delta_{Dk}^t - \arg(|y_k^t\rangle)\right)$  (14)

where

$|y_k^t\rangle = \sum_{i=1}^{N} \mathcal{H}_D(|\theta_{i,k}^t\rangle)\, T_D(|O_k^{t-1}\rangle) - \mathcal{H}_D(|\xi_i^t\rangle)$  (15)

Here, the quantum phase transformation parameter (weight) between the $k$-th output neuron and the $i$-th input neuron is $|\theta_{i,k}\rangle$, and the activation is $|\xi_i^t\rangle$. The $D$-dimensional Hadamard gate parameters are designated by $\delta_{Dk}$. Considering the basis state $|D-1\rangle$, the true outcome of quantum neuron $k$ at the output layer is obtained through quantum measurement of the $D$-dimensional quantum state $|O_k^t\rangle$ as

$M_k^{QNNM} = |\mathrm{Im}(|O_k^t\rangle)|^2$  (16)

where the imaginary part of $O_k^t$ is referred to as $\mathrm{Im}(O_k^t)$. It is worth noting that the realization mapping $T$ transforms quantum states to probability amplitudes, and hence the quantum state is destroyed on implementation in classical systems. Thus, the suggested model is not a quantum neural network in the true sense of the term; it is a quantum mechanics-inspired hybrid neural network model implementable on classical systems.

V. QUANTUM FULLY SELF-SUPERVISED NEURAL NETWORK (QFS-NET)

The suggested quantum fully self-supervised neural network architecture comprises trinity layers of qutrit neurons arranged as input, intermediate and output layers. A schematic outline of the QFS-Net architecture as a quantum neural network model is illustrated in Figure 1. The information processing units of the QFS-Net architecture are depicted using quantum neurons (qutrits) reflected in the trinity layers using the combined matrix notation

$\begin{pmatrix} |\psi_{11}\rangle & |\psi_{12}\rangle & |\psi_{13}\rangle & \ldots & |\psi_{1m}\rangle \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ |\psi_{n1}\rangle & |\psi_{n2}\rangle & |\psi_{n3}\rangle & \ldots & |\psi_{nm}\rangle \end{pmatrix}$

Hence, each quantum neuron constitutes a qutrit state designated as $\psi_{ij}$. Each layer of the quantum self-supervised neural network architecture is organized by combining the qutrit neurons in a fully-connected fashion with intra-connection strength $\pi$ (qutrit state). The main characteristic of the network architecture lies in the organization of the 8-connected second-order neighborhood subsets of each quantum neuron in the layers of the underlying architecture and their propagation to the subsequent layers for further processing. The input, intermediate/hidden and output layers are inter-connected through self-forward propagation of the qutrit states in the 8-connected neighborhood fashion. On the contrary, inter-connections are established from the output layer to the intermediate layer, entailing self-counter-propagation that obviates the quantum back-propagation algorithm and thereby reduces time complexity. Finally, a quantum observation process allows the qutrit states to collapse to one of the basis states ($|0\rangle$ or $|1\rangle$, as $|2\rangle$ is considered a temporary state). We obtain the true outcome at the output layer of the QFS-Net once the network converges; otherwise, the quantum states undergo further processing.

A. Qutrit-inspired Fully Self-supervised Quantum Neural Network Model
The novel quantum fully self-supervised neural network model based on qutrits adopts twofold schemes. The qutrit neurons of each layer are realized using a $T$ transformation gate (realization mapping), and the inter-connection weights are mapped using the phase Hadamard gates ($\mathcal{H}$) applicable to qutrits. The angle of rotation is set as the relative difference of quantum information (marked by a pink arrow in Figure 1) between each candidate qutrit neuron and the neighborhood qutrit neurons of the same layer, employed in the rotation gate for updating the inter-layer interconnections. The rotation angles for the inter-connection weights and the threshold are set as $\omega$ and $\gamma$, respectively. The inter-connection weight between qutrit neurons (denoted as $k$ and $i$) of two adjacent layers is depicted as $|\theta_{ik}\rangle$ and measured as the relative difference between the $i$-th candidate qutrit neuron and the 8-connected neighborhood quantum neuron $k$. The realization of the network weights is mapped using the Hadamard gate ($\mathcal{H}$), inspired by the proposed QNNM model, by suppressing the highest basis level ($|2\rangle$) of the qutrit as temporary storage:

$\mathcal{H}(|\theta_{ik}\rangle) = \cos\left(\frac{2\pi}{3}\omega_{i,k}\right) + j\sin\left(\frac{2\pi}{3}\omega_{i,k}\right) = \begin{bmatrix} \cos(\frac{2\pi}{3}\omega_{i,k}) \\ \sin(\frac{2\pi}{3}\omega_{i,k}) \end{bmatrix}$  (17)

where $j$ is the imaginary unit. The role of the relative measure of the quantum fuzzy information lies in the fact that the distinction between foreground and background image pixels becomes clearly visible on adapting relative measures. Assuming the quantum fuzzy grade information at the $i$-th candidate neuron and its 8-connected second-order neighborhood neurons to be $\mu_i$ and $\mu_{i,k}$, respectively, the angle of the Hadamard gate is determined as

$\omega_{i,k} = 1 - (\mu_i - \mu_{i,k}); \quad k \in \{1, 2, \ldots, 8\}$  (18)

Fig. 1: Qutrit-inspired Quantum Fully Self-Supervised Neural Network (QFS-Net) architecture, where $\mathcal{H}$ represents the Hadamard gate and $T$ is the realization gate (only three inter-layer connections are shown for clarity).

The 8 fully intra-connected, spatially arranged neighborhood qutrit neurons contribute to the candidate quantum neuron (say $i'$) of the adjacent layer through the transformation gate ($T$) and the realization mapping defined as

$|\psi_{i'}\rangle = \sum_k T(|\mu_{i,k}\rangle)\,\mathcal{H}(|\theta_{iki'}\rangle) = \sum_k \left[\mu_{i,k}\left\{\cos\left(\frac{2\pi}{3}\omega_{i,k}\right) + j\sin\left(\frac{2\pi}{3}\omega_{i,k}\right)\right\}\right]$  (19)

In addition, the contributions of the 8 fully intra-connected, spatially arranged neighborhood qutrit neurons are accumulated at the candidate qutrit neuron as the quantum fuzzy context-sensitive activation ($\xi_i$), presented using the Hadamard gate as

$\mathcal{H}(|\xi_i\rangle) = \cos\left(\frac{2\pi}{3}\gamma_i\right) + j\sin\left(\frac{2\pi}{3}\gamma_i\right) = \begin{bmatrix} \cos(\frac{2\pi}{3}\gamma_i) \\ \sin(\frac{2\pi}{3}\gamma_i) \end{bmatrix}$  (20)

where the angle of the Hadamard gate is defined as

$\gamma_i = \sum_k \mu_{i,k}$  (21)

The self-supervised forward and counter propagation of the QFS-Net are guided by a novel qutrit-based adaptive multi-class Quantum Sigmoid (QSig) activation function with quantum fuzzy context-sensitive thresholding, as discussed in subsection V-C.
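A minimal numerical sketch of Eqs. (17)-(21) follows (Python/NumPy; the variable names are our own, and the qutrit angular factor $2\pi/3$ is our reading of the Hadamard-gate realization): given the fuzzy membership of a candidate pixel and of its 8-neighborhood, it computes the Hadamard angles, the realized two-component weights, and the accumulated activation angle:

```python
import numpy as np

def hadamard_angles(mu_i, mu_neigh):
    # Eq. (18): omega_{i,k} = 1 - (mu_i - mu_{i,k}) over the 8-neighborhood
    mu_neigh = np.asarray(mu_neigh, dtype=float)
    return 1.0 - (mu_i - mu_neigh)

def realized_weights(omega):
    # Eq. (17): H(|theta_ik>) realized as [cos, sin] pairs (|2> suppressed)
    ang = 2.0 * np.pi * np.asarray(omega) / 3.0
    return np.stack([np.cos(ang), np.sin(ang)], axis=-1)

def activation_angle(mu_neigh):
    # Eq. (21): gamma_i accumulates the neighborhood fuzzy contributions
    return float(np.sum(mu_neigh))

mu_i = 0.6
mu_neigh = np.full(8, 0.6)              # a flat neighborhood, for illustration
omega = hadamard_angles(mu_i, mu_neigh) # all ones when mu_i == mu_{i,k}
w = realized_weights(omega)             # each row is (cos(2*pi/3), sin(2*pi/3))
gamma = activation_angle(mu_neigh)
```

In a homogeneous region the relative differences vanish, so all weight angles collapse to the same value; it is the departure of $\mu_{i,k}$ from $\mu_i$ at lesion boundaries that differentiates the weights.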
The basis of the network dynamics of the QFS-Net is centred on the bi-directional self-organized propagation of the qutrit states between the intermediate and output layers via updating of the inter-connection links. The network's basic input-output relation is presented through the composition of a sequence using the transformation gate ($T$) and the realization mapping defined as

$|\psi_k^l\rangle = QSig\left(\sum_{i=1}^{8} T(\psi_{k,i}^{l-1})\,\mathcal{H}(\langle\theta_i^l|\xi_i^l\rangle)\right)$  (22)

where $|\psi_k^l\rangle$ is the output of the $k$-th constituent qutrit neuron at the $l$-th layer, and the contribution of each of the 8-connected neighborhood qutrit neurons of the $k$-th candidate neuron is expressed as $|\psi_k^{l-1}\rangle$, i.e.,

$|\psi_k^l\rangle = T\left[\frac{2\pi}{3}\delta_k^l - \arg\left\{\sum_{i=1}^{8}\mathcal{H}(|\theta_{k,i}^l\rangle)\, T(|\psi_{k,i}^{l-1}\rangle) - \mathcal{H}(|\xi_k^l\rangle)\right\}\right]$  (23)

Quantum observation on a qutrit neuron transforms a quantum state into a basis state, and a true outcome ($|1\rangle$) is obtained on measurement from the qutrit neuron considering the imaginary part of $|\psi_k^l\rangle$ as

$O_k^l = |\mathrm{Im}(|\psi_k^l\rangle)|^2$  (24)

i.e.,

$O_k^l = QSig\left(\sum_{i=1}^{8} T(\psi_{k,i}^{l-1})\left\{\cos\left(\frac{2\pi}{3}(\omega_{k,i}^l - \gamma_k^l)\right) + j\sin\left(\frac{2\pi}{3}(\omega_{k,i}^l - \gamma_k^l)\right)\right\}\right)$  (25)

where the quantum phase transmission parameter from the input qutrit neuron $i$ (in the neighborhood of the $k$-th qutrit neuron at layer $l-1$) to the intermediate qutrit neuron $k$ with activation $\xi_k^l$ is $\omega_{k,i}^l$. The rotation gate parameters are expressed as $\delta_k^l$, with the activation parameters $\gamma_k^l$ at layer $l$. The activation function employed in the proposed QFS-Net model is a novel adaptive multi-class qutrit-embedded sigmoidal (QSig) activation function, illustrated in subsection V-C.
B. Qutrit-Inspired Self-supervised Learning of QFS-Net
Let the interconnection weights in terms of qutrits between the input and the hidden (intermediate) layer be expressed as $|\theta_{ipi'}^l\rangle$ (here, any candidate qutrit neuron at the input layer is $i$, its corresponding candidate neuron at the next intermediate layer is $i'$, and its corresponding 8-connected neighborhood neurons are described by $p$), and those from the intermediate layer to the output layer as $|\theta_{jqj'}^l\rangle$ (here, any candidate qutrit neuron at the intermediate layer is $j$, its corresponding candidate neuron at the next output layer is $j'$, and its corresponding 8-connected neighborhood neurons are described by $q$) at the $l$-th iteration. The activations at the intermediate and output layers are expressed as $|\xi_j^l\rangle$ and $|\xi_k^l\rangle$, respectively. The self-supervised counter-propagation of the quantum states from the output to the intermediate layer is performed through the interconnection weight $|\theta_{krk'}^l\rangle$ (here, any candidate qutrit neuron at the output layer is $k$, its corresponding candidate neuron at the next intermediate layer is $k'$, and its corresponding 8-connected neighborhood neurons are described by $r$).
The outcome of a qutrit neuron ($|\psi_k^l\rangle$) at the output layer can be expressed as

$|\psi_k^l\rangle = QSig\left(\sum_{q=1}^{8} T(|\psi_{jq}^{l-1}\rangle)\,\mathcal{H}(\langle\theta_{jqj'}^l|\xi_k^l\rangle)\right) = QSig\left(\sum_{q=1}^{8} T\left(\frac{2\pi}{3}\, QSig\left(\sum_{p=1}^{8}\left(\frac{2\pi}{3}x_{ip}\right)\mathcal{H}(\langle\theta_{ipi'}^{l-1}|\xi_j^{l-1}\rangle)\right)\right)\mathcal{H}(\langle\theta_{jqj'}^l|\xi_k^l\rangle)\right)$  (26)

i.e.,

$|\psi_k^l\rangle = QSig\left(\sum_{q=1}^{8} T\left(\frac{2\pi}{3}\left\{QSig\left(\sum_{p=1}^{8}\left(\frac{2\pi}{3}x_{ip}\right)\cos\left(\frac{2\pi}{3}(\omega_{ipi'}^l - \gamma_j^l)\right)\right)\cos\left(\frac{2\pi}{3}(\omega_{jqj'}^l - \gamma_k^l)\right) + j\sin\left(\frac{2\pi}{3}(\omega_{ipi'}^l - \gamma_j^l)\right)\sin\left(\frac{2\pi}{3}(\omega_{jqj'}^l - \gamma_k^l)\right)\right\}\right)\right)$  (27)

where $x_{ip}$ represents the classical input to the neighborhood neuron $p$ with respect to a candidate neuron $i$ at the input layer, which is subsequently transformed into a qutrit state ($|\phi_{ip}\rangle = \frac{2\pi}{3}x_{ip}$), and $j$ is the imaginary unit. An adaptive multi-class qutrit-embedded sigmoidal (QSig) activation function employed in this self-supervised network model governs the activation at the intermediate and output layers, as well as the subsequent processing of the quantum states, guided by various thresholding schemes.
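To make the layer dynamics concrete, the following simplified sketch (our own simplification, not the authors' code) propagates a neighborhood of realized states through one layer following the structure of Eq. (25): the cosine of the phase difference $\omega_{k,i}^l - \gamma_k^l$ weights each neighborhood contribution before squashing; a plain sigmoid stands in for the multi-class QSig of subsection V-C, and the $2\pi/3$ factor is the qutrit angular scaling assumed above:

```python
import numpy as np

def layer_output(psi_prev, omega, gamma, lam=15.0):
    # Simplified form of Eq. (25): weight each of the 8 neighborhood
    # contributions by cos(2*pi/3 * (omega_{k,i} - gamma_k)), sum them,
    # and squash with a sigmoid (a stand-in for the multi-class QSig).
    phase = 2.0 * np.pi * (np.asarray(omega) - gamma) / 3.0
    pre_activation = np.sum(np.asarray(psi_prev) * np.cos(phase))
    return 1.0 / (1.0 + np.exp(-lam * pre_activation))

psi_prev = np.full(8, 0.1)   # realized outputs of the 8 neighborhood neurons
omega = np.ones(8)           # phase transmission parameters
out = layer_output(psi_prev, omega, gamma=1.0)  # phases cancel, output near 1
```

When the transmission phases equal the activation phase, the cosine weights are all 1 and the neuron fires strongly; mismatched phases attenuate or invert the contributions, which is how the counter-propagated states reshape the segmentation between iterations.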
C. Adaptive Multi-class Quantum Sigmoidal (QSig) Activation Function
In this paper, we have introduced an adaptive multi-class sigmoidal activation function in quantum formalism suitable for pixel-wise multi-class segmentation of medical images varying over multi-intensity gray-scales. The proposed QSig activation function is a modification of the recently developed quantum multi-level sigmoid activation function employed in the authors' previous work [39], [40]. An optimized version of a similar function is also introduced in [41]. However, the requirement of finding the optimal thresholding of the images in the activation function is computationally exhaustive and time dependent. The proposed QSig relies on an adaptive step length incorporating the total number of segmentation levels with various schemes of activation. The QSig activation function employed in the QFS-Net model is defined as

$QSig(x) = \frac{1}{\kappa_{\vartheta} + e^{-\lambda(\frac{x}{h} - \eta)}}$  (28)

where $QSig(x)$ represents the adaptive multi-class quantum sigmoidal activation function with steepness parameter $\lambda$, step size $h$ and activation $\eta$, described by qutrits. The multi-level class output $\kappa_{\vartheta}$ as a qutrit is defined as

$\kappa_{\vartheta} = \frac{Q_N}{\tau_{\vartheta} - \tau_{\vartheta-1}}$  (29)

The gray-scale intensity index is expressed as $\kappa_{\vartheta}$ ($1 \le \kappa_{\vartheta} \le L$), where $\vartheta$ is the class index. The $\vartheta$-th and $(\vartheta-1)$-th class responses are denoted as $\tau_{\vartheta}$ and $\tau_{\vartheta-1}$, respectively, and the sum of the contributions of the 8-connected neighborhood qutrit neurons representing gray-scale pixels is denoted by $Q_N$.

Fig. 2: Multi-level class outcomes of the QSig activation function for $\lambda = 15, 20, 25$ and $h = 1$ with segmentation levels (a) $L = 3$, (b) $L = 4$, (c) $L = 6$, (d) $L = 8$.

The generalized version of the QSig activation function defined in Equation 28 can be modified by leveraging $\kappa_{\vartheta}$ with various subnormal responses $\sigma_{\kappa_{\vartheta}}$ as qutrits, where $0 \le \sigma_{\kappa_{\vartheta}} \le \pi$. The multi-level class output is obtained on superposition of the subnormal responses, and the generic QSig activation function can be expressed as

$QSig(x; \kappa_{\vartheta}, \tau_{\vartheta}) = \frac{1}{\kappa_{\vartheta} + e^{-\lambda(x - (\vartheta - \frac{L+1}{2})\tau_{\vartheta-1} - \eta)}}$  (30)

In order to ensure that the number of distinct $\kappa_{\vartheta}$ parameters is equal to the number of multi-level classes ($L - 1$), Equation 31 depicts the closed form of the resultant QSig function as

$QSig_R(x) = \sum_{\vartheta=1}^{L} QSig\left(x - \left(\vartheta - \frac{L+1}{2}\right)\tau_{\vartheta-1}\right); \quad \left(\vartheta - \frac{L+1}{2}\right)\tau_{\vartheta-1} \le x \le \vartheta\tau_{\vartheta}$  (31)

Substituting Equation 30 in Equation 31, the updated form is expressed as

$QSig_R(x; \kappa_{\vartheta}, \tau_{\vartheta}) = \sum_{\vartheta=1}^{L} \frac{1}{\kappa_{\vartheta} + e^{-\lambda(x - (\vartheta - \frac{L+1}{2})\tau_{\vartheta-1} - \eta)}}$  (32)

Different forms of the QSig activation function with different values of the steepness parameter are illustrated in Figure 2.
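A runnable sketch of Eqs. (28)-(29) and (32) follows (Python/NumPy; parameter values are illustrative, and the class responses `tau` are supplied by the caller here rather than derived from an image):

```python
import numpy as np

def qsig(x, kappa, lam=15.0, h=1.0, eta=0.0):
    # Eq. (28): adaptive multi-class quantum sigmoid; saturates at 1/kappa
    return 1.0 / (kappa + np.exp(-lam * (np.asarray(x, dtype=float) / h - eta)))

def kappa_class(q_n, tau, v):
    # Eq. (29): kappa_v = Q_N / (tau_v - tau_{v-1})
    return q_n / (tau[v] - tau[v - 1])

def qsig_resultant(x, kappas, taus, L, lam=15.0, eta=0.0):
    # Eq. (32): superposition of L shifted class responses
    xa = np.asarray(x, dtype=float)
    total = np.zeros_like(xa)
    for v in range(1, L + 1):
        shift = (v - (L + 1) / 2.0) * taus[v - 1]
        total += 1.0 / (kappas[v - 1] + np.exp(-lam * (xa - shift - eta)))
    return total

y = qsig(np.array([-1.0, 0.0, 1.0]), kappa=2.0)  # monotone, capped at 1/2
r = qsig_resultant(np.array([0.5]), [1.0, 1.0, 1.0], [0.2, 0.4, 0.6], L=3)
```

The key difference from a plain sigmoid is the class-dependent denominator: each $\kappa_{\vartheta}$ caps the response of its class at $1/\kappa_{\vartheta}$, so the superposition in Eq. (32) produces a staircase of plateaus, one per gray-scale class.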
D. Updating Inter-connection Weight using Hadamard Gate
The interconnection weights and the activation of the QFS-Net architecture are updated using a Hadamard gate ($\mathcal{H}$) working on qutrits as follows:

$\mathcal{H}(|\theta^{\iota+1}\rangle) = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & e^{\frac{2\pi j}{3}\triangle\omega} & e^{-\frac{2\pi j}{3}\triangle\omega} \\ 1 & e^{-\frac{2\pi j}{3}\triangle\omega} & e^{\frac{2\pi j}{3}\triangle\omega} \end{pmatrix} \mathcal{H}(|\theta^{\iota}\rangle)$  (33)

$\mathcal{H}(|\xi^{\iota+1}\rangle) = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & e^{\frac{2\pi j}{3}\triangle\gamma} & e^{-\frac{2\pi j}{3}\triangle\gamma} \\ 1 & e^{-\frac{2\pi j}{3}\triangle\gamma} & e^{\frac{2\pi j}{3}\triangle\gamma} \end{pmatrix} \mathcal{H}(|\xi^{\iota}\rangle)$  (34)

where

$\omega^{\iota+1} = \omega^{\iota} + \triangle\omega^{\iota}$  (35)

and

$\gamma^{\iota+1} = \gamma^{\iota} + \triangle\gamma^{\iota}$  (36)

Suitable tailoring of the phase angle in the Hadamard gate ensures the stability of the QFS-Net and its convergence, which is crucial for self-supervised networks where the loss function (here, the error function) depends on the interconnection weights. Hence, the phase angles are evaluated using $\triangle\omega^{\iota}$ and $\triangle\gamma^{\iota}$ as given in Equations 18 and 21, respectively. It is worth noting that the qutrit-based quantum neural network provides faster convergence compared to classical neural networks. Whereas classical neural networks are formed using the multiplication of the input vector and the weight vector guided by an activation function, quantum-based networks incorporate the frequency components of the weights and their inputs, thereby enabling faster convergence of the network states. This inherent feature of quantum neural networks facilitates the qutrit-based fully self-organized quantum algorithm employed in QFS-Net to converge super-linearly, as shown in Figure 3. The loss function cum QFS-Net network error function is defined on quantum measurement in the following way:

$\zeta(\omega, \gamma) = \frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{8} \left[\Theta_{ik}(\omega_{ik}, \gamma_i)^{\iota+1} - \Theta_{ik}(\omega_{ik}, \gamma_i)^{\iota}\right]^2$  (37)

where $\Theta_{ik}(\omega_{ik}, \gamma_i)^{\iota}$ represents the true interconnection weight terms of the inter-connection weights $|\theta_{ij}^{\iota}\rangle$ as expressed using the Hadamard gate ($\mathcal{H}$) at instance $\iota$. $\zeta(\omega, \gamma)$ is a coherent error function of $\omega$ and $\gamma$.
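The additive phase updates of Eqs. (35)-(36) and the stopping criterion of Eq. (37) can be sketched as follows (Python/NumPy; as an assumption on our part, the weight terms $\Theta_{ik}$ are realized as the $[\cos, \sin]$ pairs of Eq. (17), and the mean squared change between successive epochs serves as the convergence measure):

```python
import numpy as np

def update_phase(omega, d_omega):
    # Eqs. (35)-(36): additive update of the Hadamard phase angles
    return np.asarray(omega) + np.asarray(d_omega)

def realized(omega):
    # Realized weight terms Theta_ik, as the [cos, sin] pairs of Eq. (17)
    ang = 2.0 * np.pi * np.asarray(omega) / 3.0
    return np.stack([np.cos(ang), np.sin(ang)], axis=-1)

def network_error(omega_new, omega_old):
    # Eq. (37): mean squared change of the realized weights between epochs;
    # the network is declared converged when this falls below a tolerance.
    diff = realized(omega_new) - realized(omega_old)
    return float(np.mean(np.sum(diff ** 2, axis=-1)))

omega = np.linspace(0.0, 1.0, 8)
omega_next = update_phase(omega, np.zeros(8))  # no phase change
err = network_error(omega_next, omega)         # zero error at a fixed point
```

Because the error compares successive states of the same network rather than predictions against labels, no ground truth enters the criterion, which is what makes the scheme fully self-supervised.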
Convergence analysis of the proposed qutrit-inspired QFS-Net is provided in Appendix Section A and demonstrated experimentally against the qubit-embedded QIS-Net [39], as shown in Figure 3. It can be summarized that the convergence of the QFS-Net is faster than that of the QIS-Net and also follows super-linearity. This claim is further substantiated by the number of iterations required to converge for each image slice in QFS-Net and QIS-Net, as illustrated in Figure 4.

Fig. 3: Convergence analyses of the suggested qutrit-inspired QFS-Net and the qubit-embedded QIS-Net [39] for four different activation schemes [(a)-(d) QIS-Net and (e)-(h) QFS-Net, for class levels S1-S4]

VI. RESULTS AND DISCUSSION
A. Data Set
Cancer Imaging Archive (TCIA) data are available from the Nature repository [14], and the experiments have been performed on these data sets using the suggested QFS-Net model, characterized by qutrits and an adaptive multi-class quantum sigmoidal (QSig) activation function. For automated brain lesion segmentation, four distinct activation schemes have been tested, and experiments are also performed using the Quantum-Inspired Self-supervised Network (QIS-Net) [39], U-Net [18], and URes-Net [20] architectures. The U-Net [18] and URes-Net [20] architectures are trained, validated, and tested on contrast-enhanced Dynamic Susceptibility Contrast (DSC) MR images. The QIS-Net and the proposed QFS-Net are also tested on the same number of contrast-enhanced DSC MR images.

Fig. 4: Average number of iterations for each brain slice using the qutrit-based QFS-Net and the qubit-based QIS-Net [39] for four thresholding schemes (a) η_β, (b) η_χ, (c) η_ξ, (d) η_ν with class levels S1-S4 [39]

B. Experimental Setup
In the current work, extensive experiments have been carried out on Dynamic Susceptibility Contrast (DSC) brain MR images of glioma patients from the TCIA data sets, using an Nvidia RTX 2070 GPU system with MATLAB 2020a and Python 3.6. The 2D segmented images are processed through a 2D binary circular mask to obtain the brain lesion in the suggested QFS-Net framework. The lesion or brain tumor detection mask is binarized using a fixed threshold, and in the case of QFS-Net and QIS-Net [39] the segmented ROIs perform optimally when compared with the human-expert segmented images. Experiments are also performed on two recently developed CNN architectures suitable for medical image segmentation, viz., the convolutional U-Net [18] and the Residual U-Net (URes-Net) [20], available on GitHub. The U-Net and URes-Net networks are rigorously trained using the stochastic gradient descent algorithm. The segmented output images match the dimensions of the binary mask, with the outcome labeled as tumor region or background in detecting the complete tumor. The pixel-by-pixel comparison with the manually segmented regions of interest (the lesion mask) allows evaluating the dice similarity, which is considered a standard evaluation procedure in automatic medical image segmentation. The evaluation process uses the manually segmented lesion mask as ground truth, and each 2D pixel is predicted as either True Positive (TRP), True Negative (TRN), False Positive (FLP), or False Negative (FLN).

The suggested qutrit-inspired fully self-supervised shallow quantum learning model is experimented with on multi-level gray-scale images using distinct class levels L, characterized by an adaptive multi-class quantum sigmoidal (QSig) activation function. In this experiment, the steepness λ of the QSig activation is varied over a small range, and it has been observed that in the majority of cases a moderate value of λ yields optimal performance. The empirical goodness measures [Positive Predictive Value (PPV), Sensitivity (SS), Accuracy (ACC), and Dice Similarity (DS) [44]] are assessed to evaluate the experimental outcome using four thresholding schemes (η_β, η_χ, η_ξ, η_ν) [39], [45], as discussed in the supplementary materials section for different level sets. The dice score is often used to measure the similarity of the segmented brain lesions and regions of interest (ROIs).

C. Experimental Results
Extensive experiments have been performed in the current setup, and the outcomes are reported with numerical and statistical analyses using the proposed QFS-Net, QIS-Net [39], convolutional U-Net [18], and Residual U-Net (URes-Net) [20] architectures. Sample human-expert segmented, skull-stripped, contrast-enhanced DSC brain MR input image slices and ROIs are provided in Figure 5. QFS-Net segmented images, followed by the essential post-processed outcome, for class level L = 8 with the four distinct activation schemes (η_β, η_χ, η_ξ, η_ν) are shown in Figure 6. It is evident from the experimental data provided in Table I that the proposed QFS-Net performs optimally for the 8-connected quantum fuzzy pixel-information heterogeneity-assisted activation (η_ξ) with L = 8, in comparison with the other thresholding schemes and gray-scale sets, under the four evaluation parameters (ACC, DS, PPV, SS) [44]. The segmented tumors obtained using the proposed self-supervised procedure under L = 8 class transition levels with the four thresholding schemes η_β, η_χ, η_ξ, and η_ν are demonstrated in Figures 7-8 for two of the class boundary sets [39]; the segmented images for the remaining two class boundary sets [39] are provided in the supplementary materials section. The segmented ROIs describing the whole tumor region after the masking procedure using QIS-Net, U-Net, and URes-Net are also reported in Figure 9.

Fig. 5: Dynamic Susceptibility Contrast (DSC) skull-stripped brain MR images and manually segmented ROI slices [14]

Table II presents the numerical results obtained using the proposed QFS-Net and QIS-Net [39] on evaluating the average accuracy (ACC), dice similarity score (DS), positive prediction value (PPV), and sensitivity (SS), reported under L = 8 class transition levels [39] with the four thresholding schemes (η_β, η_χ, η_ξ, and η_ν). The average number of iterations required to converge for each class boundary set is also reported in Table II. In addition, Table III summarizes the results obtained using the convolutional U-Net [18] and Residual U-Net (URes-Net) [20] architectures for two distinct convolutional mask sizes and stride sizes. However, the convolution-based architectures (U-Net and URes-Net) only marginally outperform our proposed qutrit-inspired fully self-supervised quantum neural network model QFS-Net and the previously developed qubit-based QIS-Net [39]. Box plots citing the outcomes reported in Tables II and III are demonstrated in the supplementary materials section. Moreover, to show the effectiveness of our proposed QFS-Net over QIS-Net, U-Net, and URes-Net, we have conducted a one-sided two-sample Kolmogorov-Smirnov (KS) test [46] with a fixed significance level α. It is interesting to note that, in spite of being fully self-supervised quantum learning inspired by qutrits, the QFS-Net has shown similar accuracy (ACC) and dice similarity (DS) compared with U-Net [18] and URes-Net [20]. Hence, it can be concluded that the performance of the QFS-Net model on Dynamic Susceptibility Contrast (DSC) brain MR images is statistically significant and offers a potential alternative to deep learning solutions.

Fig. 6: QFS-Net segmented images, followed by the essential post-processed outcome [14], for class level L = 8 with the four distinct activation schemes (η_β, η_χ, η_ξ, η_ν): (a)-(d) for S1, (e)-(h) for S2, (i)-(l) for S3, and (m)-(p) for S4 [39]

Fig. 7: Segmented ROIs describing the complete tumor region after post-processing using the proposed QFS-Net [14] with L = 8 transition levels and the four thresholding schemes (η_β, η_χ, η_ξ, η_ν) [39]

Fig. 8: Segmented ROIs describing the complete tumor region after post-processing using the proposed QFS-Net [14] with L = 8 transition levels and the four thresholding schemes (η_β, η_χ, η_ξ, η_ν) [39]

Fig. 9: ROI-segmented output slice [14] after masking and post-processing using (a) QIS-Net [39], (b) U-Net [18], (c) URes-Net [20]

TABLE I: Segmented accuracy, dice similarity score, PPV, and sensitivity for the slice [14] using QFS-Net, where
ACC = (TRP + TRN) / (TRP + FLP + TRN + FLN), DS = 2 TRP / (2 TRP + FLP + FLN), PPV = TRP / (TRP + FLP), SS = TRP / (TRP + FLN)
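The four measures defined in Table I can be computed directly from pixel-wise counts of a predicted binary mask against the expert ground truth. A minimal sketch (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise ACC, DS (Dice), PPV, and SS for binary masks.

    pred, truth: arrays of the same shape, where a truthy value marks a
    tumor pixel. Count names follow the text: TRP, TRN, FLP, FLN.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    trp = np.sum(pred & truth)      # true positives
    trn = np.sum(~pred & ~truth)    # true negatives
    flp = np.sum(pred & ~truth)     # false positives
    fln = np.sum(~pred & truth)     # false negatives
    acc = (trp + trn) / (trp + flp + trn + fln)
    ds = 2 * trp / (2 * trp + flp + fln)   # Dice similarity
    ppv = trp / (trp + flp)
    ss = trp / (trp + fln)                 # sensitivity (recall)
    return acc, ds, ppv, ss

# Toy 2x3 masks: two overlapping pixels, one spurious, one missed.
acc, ds, ppv, ss = segmentation_metrics(
    np.array([[1, 1, 0], [0, 1, 0]]),
    np.array([[1, 0, 0], [0, 1, 1]]))
```

A perfect prediction (`pred` equal to `truth`) yields `ds == 1.0`, while a disjoint prediction yields `ds == 0.0`.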
TABLE II: Average performance analyses of QFS-Net and QIS-Net [39] for four distinct class levels and activations, including the average number of iterations to converge [a one-sided non-parametric two-sample KS test [46] has been conducted, with significant results marked in bold]
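The one-sided two-sample KS test referenced in the table captions compares the empirical distribution functions of two sets of scores. A minimal sketch of the underlying two-sample statistic (illustrative code, not the exact procedure of [46]):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D = sup |F_a - F_b|,
    evaluated at the pooled sample points (a minimal sketch; the paper
    uses the one-sided critical values tabulated in [46])."""
    a, b = np.sort(a), np.sort(b)
    pooled = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))
```

The statistic is 0 for identical samples and approaches 1 for fully separated samples; in practice `scipy.stats.ks_2samp` offers the same computation with one-sided alternatives.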
TABLE III: Performance analyses of U-Net [18] and URes-Net [20] for four distinct class levels and activations, for two convolutional mask sizes and stride sizes [a one-sided non-parametric two-sample KS test [46] has been conducted, with significant results marked in bold]

VII. CONCLUSION
An automated brain tumor segmentation method using a fully self-supervised QFS-Net, encompassing a qutrit-inspired quantum neural network model, is presented in this work. The pixel intensities and the interconnection weight matrix are expressed in quantum formalism on classical simulations, thereby reducing the computational overhead and enabling faster convergence of the network states. This intrinsic property of the quantum fully self-supervised neural network model allows attaining accurate and time-efficient segmentation in real time. The suggested QFS-Net achieves high accuracy and dice similarity in spite of being a fully self-supervised neural network model.

The proposed quantum neural network model also maps faithfully onto a quantum hardware circuit and can be implemented using quantum gates alongside its classical counterparts. The proposed QFS-Net model offers the possibilities of entanglement and superposition in the network architecture, which are often missing in classical implementations. However, it is worth noting that the suggested qutrit-inspired fully self-supervised quantum neural network model is computed and experimented on a classical system. Hence, the proposed architecture is not quantum in a real sense; instead, it is quantum-inspired. It is also worth noting that the QFS-Net is validated solely for the complete tumor, although the network has potential for multi-level segmentation, which is evident from the segmented brain MR lesions. Nevertheless, it remains an uphill task to optimize the hyper-parameters for obtaining optimal multi-class segmentation. The authors are currently engaged in this direction.

REFERENCES

[1] A. Osterloh, L. Amico, G. Falci, and R. Fazio, "Scaling of entanglement close to a quantum phase transition," Nature, vol. 416, no. 6881, pp. 608-610, 2002.
[2] V. Gandhi, G. Prasad, D. Coyle, L. Behera, and T. M. McGinnity, "Quantum neural network-based EEG filtering for a brain-computer interface," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 278-288, 2014.
[3] C. Chen, D. Dong, H. X. Li, J. Chu, and T. J. Tarn, "Fidelity-based probabilistic Q-learning for control of quantum systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 920-933, 2014.
[4] P. Li, H. Xiao, F. Shang, X. Tong, X. Li, and M. Cao, "A hybrid quantum-inspired neural networks with sequence inputs," Neurocomputing, vol. 117, pp. 81-90, 2013.
[5] T. C. Lu, G. R. Yu, and J. C. Juang, "Quantum-based algorithm for optimizing artificial neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 8, pp. 1266-1278, 2013.
[6] S. Bhattacharyya, P. Pal, and S. Bhowmick, "Binary image denoising using a quantum multilayer self organizing neural network," Applied Soft Computing, vol. 24, pp. 717-729, 2014.
[7] D. Konar, S. Bhattacharyya, B. K. Panigrahi, and K. Nakamatsu, "A quantum bi-directional self-organizing neural network (QBDSONN) architecture for binary object extraction from a noisy perspective," Applied Soft Computing, vol. 46, pp. 731-752, 2016.
[8] D. Konar, S. Bhattacharyya, U. Chakraborty, T. K. Gandhi, and B. K. Panigrahi, "A quantum parallel bi-directional self-organizing neural network (QPBDSONN) architecture for extraction of pure color objects from noisy background," Proc. IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1912-1918, 2016.
[9] M. Schuld, I. Sinayskiy, and F. Petruccione, "The quest for a quantum neural network," Quantum Information Processing, vol. 13, pp. 2567-2586, 2014.
[10] A. Kapoor, N. Wiebe, and K. Svore, "Quantum perceptron models," Advances in Neural Information Processing Systems (NIPS 2016), vol. 29, pp. 3999-4007, 2016.
[11] A. Narayanan and T. Menneer, "Quantum artificial neural network architectures and components," Information Sciences, vol. 128, no. 3-4, pp. 231-255, 2000.
[12] C. Y. Liu, C. Chen, C. T. Chang, and L. M. Shih, "Single-hidden-layer feed-forward quantum neural network based on Grover learning," Neural Networks, vol. 45, pp. 144-150, 2013.
[13] M. C. Clark, L. O. Hall, D. B. Goldgof, R. Velthuizen, F. R. Murtagh, and M. S. Silbiger, "Automatic tumor segmentation using knowledge-based techniques," IEEE Transactions on Medical Imaging, vol. 17, no. 2, pp. 187-201, 1998.
[14] K. M. Schmainda, M. A. Prah, J. M. Connelly, and S. D. Rand, "Glioma DSC-MRI perfusion data with standard imaging and ROIs," The Cancer Imaging Archive, DOI: 10.7937/K9/TCIA.2016.5DI84Js8.
[15] C.-H. Lee, S. Wang, A. Murtha, M. R. G. Brown, and R. Greiner, "Segmenting brain tumors using pseudo-conditional random fields," Medical Image Computing and Computer-Assisted Intervention (MICCAI 2008), New York: Springer, pp. 359-366, 2008.
[16] D. Zikic, B. Glocker, E. Konukoglu, J. Shotton, A. Criminisi, D. H. Ye, C. Demiralp, O. M. Thomas, T. Das, R. Jena, and S. J. Price, "Context sensitive classification forests for segmentation of brain tumor tissues," Med. Image Comput. Comput.-Assisted Intervention Conf., Brain Tumor Segmentation Challenge, Nice, France, 2012.
[17] D. Zikic et al., "Segmentation of brain tumor tissues with convolutional neural networks," MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), pp. 36-39, 2014.
[18] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, Springer, 2015.
[19] A. Brebisson and G. Montana, "Deep neural networks for anatomical brain segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 20-28, 2015.
[20] R. Guerrero, C. Qin, O. Oktay, C. Bowles, L. Chen, R. Joules, R. Wolz, et al., "White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks," NeuroImage: Clinical, vol. 17, pp. 918-934, 2018.
[21] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, "Brain tumor segmentation using convolutional neural networks in MRI images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, 2016.
[22] G. Wang, "Interactive medical image segmentation using deep learning with image-specific fine tuning," IEEE Transactions on Medical Imaging, vol. 37, no. 7, 2018.
[23] X. Zhuang, Y. Li, Y. Hu, K. Ma, Y. Yang, and Y. Zheng, "Self-supervised feature learning for 3D medical images by playing a Rubik's cube," Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), pp. 420-428, 2019.
[24] E. C. Behrman, J. E. Steck, P. Kumar, and K. A. Walsh, "Quantum algorithm design using dynamic learning," Quantum Inf. Comput., vol. 8, pp. 12-29, 2008.
[25] S. Kak, "On quantum neural computing," Information Sciences, vol. 83, pp. 143-160, 1995.
[26] D. Ventura and T. Martinez, "An artificial neuron with quantum mechanical properties," Proc. Intl. Conf. Artificial Neural Networks and Genetic Algorithms, pp. 482-485, 1997.
[27] E. C. Behrman, J. Niemel, J. E. Steck, and S. R. Skinner, "A quantum dot neural network," Proc. 4th Workshop Phys. Comput. (PhysComp), pp. 22-24, 1996.
[28] E. C. Behrman, N. H. Nguyen, J. E. Steck, and M. McCann, "Quantum neural computation of entanglement is robust to noise and decoherence," Quantum Inspired Computational Intelligence: Research and Applications, S. Bhattacharyya, Ed., Amsterdam, The Netherlands: Elsevier, pp. 3-33, 2016.
[29] N. H. Nguyen, E. C. Behrman, and J. E. Steck, "Quantum learning with noise and decoherence: A robust quantum neural network," arXiv:1612.07593, 2016. [Online]. Available: https://arxiv.org/abs/1612.07593
[30] E. C. Behrman and J. E. Steck, "Multiqubit entanglement of a general input state," Quantum Inf. Comput., vol. 13, pp. 36-53, 2013.
[31] N.-H. Nguyen, E. C. Behrman, A. Moustafa, and J. E. Steck, "Benchmarking neural networks for quantum computations," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-10, 2019, DOI: 10.1109/TNNLS.2019.2933394.
[32] G. Purushothaman and N. B. Karayiannis, "Quantum neural networks (QNNs): Inherently fuzzy feedforward neural networks," IEEE Transactions on Neural Networks, vol. 8, no. 3, 1997.
[33] F. Tacchino, C. Macchiavello, D. Gerace, and D. Bajoni, "An artificial neuron implemented on an actual quantum processor," Quantum Information, vol. 5, no. 26, 2019.
[34] R. Schützhold, "Pattern recognition on a quantum computer," arXiv:0208063, 2002. [Online]. Available: https://arxiv.org/abs/quant-ph/0208063
[35] C. A. Trugenberger, "Quantum pattern recognition," arXiv:0210176v2, 2002. [Online]. Available: https://arxiv.org/abs/quant-ph/0210176v2
[36] N. Masuyama, C. K. Loo, M. Seera, and N. Kubota, "Quantum-inspired multidirectional associative memory with a self-convergent iterative learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 4, pp. 1058-1068, 2018, DOI: 10.1109/TNNLS.2017.2653114.
[37] A. Ghosh, N. R. Pal, and S. K. Pal, "Self organization for object extraction using a multilayer neural network and fuzziness measures," IEEE Transactions on Fuzzy Systems, vol. 1, no. 1, pp. 54-68, 1993.
[38] S. Bhattacharyya, P. Dutta, and U. Maulik, "Binary object extraction using bi-directional self-organizing neural network (BDSONN) architecture with fuzzy context sensitive thresholding," Pattern Anal. Applic., vol. 10, pp. 345-360, 2007.
[39] D. Konar, S. Bhattacharyya, T. K. Gandhi, and B. K. Panigrahi, "A quantum-inspired self-supervised network model for automatic segmentation of brain MR images," Applied Soft Computing, vol. 93, 2020, DOI: https://doi.org/10.1016/j.asoc.2020.106348.
[40] D. Konar, S. Bhattacharyya, and B. K. Panigrahi, "QIBDS Net: A quantum-inspired bi-directional self-supervised neural network architecture for automatic brain MR image segmentation," Proc. 8th International Conference on Pattern Recognition and Machine Intelligence (PReMI 2019), vol. 11942, pp. 87-95, 2019.
[41] D. Konar, S. Bhattacharyya, S. Dey, and B. K. Panigrahi, "Opti-QIBDS Net: A quantum-inspired optimized bi-directional self-supervised neural network architecture for automatic brain MR image segmentation," Proc. 2019 IEEE Region 10 Conference (TENCON), pp. 761-766, 2019.
[42] P. Gokhale, J. M. Baker, C. Duckering, N. C. Brown, K. R. Brown, and F. Chong, "Asymptotic improvements to quantum circuits via qutrits," ISCA '19: Proceedings of the 46th International Symposium on Computer Architecture, pp. 554-566, 2019, https://doi.org/10.1145/3307650.3322253.
[43] S. Çorbaci, M. D. Karakaş, and A. Gençten, "Construction of two qutrit entanglement by using magnetic resonance selective pulse sequences," Journal of Physics: Conference Series, vol. 766, no. 1, 2014.
[44] A. P. Zijdenbos, B. M. Dawant, R. A. Margolin, and A. C. Palmer, "Morphometric analysis of white matter lesions in MR images: Method and validation," IEEE Transactions on Medical Imaging, vol. 13, no. 4, pp. 716-724, 1994.
[45] S. Bhattacharyya, P. Dutta, and U. Maulik, "Multilevel image segmentation with adaptive image context based thresholding," Applied Soft Computing, vol. 11, no. 1, pp. 946-962, 2011.
[46] M. H. Gail and S. B. Green, "Critical values for the one-sided two-sample Kolmogorov-Smirnov statistic," J. Am. Stat. Assoc., vol. 71, pp. 757-760, 1976.

APPENDIX
A. Convergence analysis of QFS-Net
Let the optimal phase angles for the weight matrix and the activation be denoted as \(\omega^{*}\) and \(\gamma^{*}\), respectively, and define
\[
\upsilon^{\iota} = \omega^{\iota} - \omega^{*} \tag{38}
\]
\[
\mu^{\iota} = \gamma^{\iota} - \gamma^{*} \tag{39}
\]
and
\[
\delta^{\iota} = \omega^{\iota+1} - \omega^{\iota} = \upsilon^{\iota+1} - \upsilon^{\iota} \tag{40}
\]
\[
\rho^{\iota} = \gamma^{\iota+1} - \gamma^{\iota} = \mu^{\iota+1} - \mu^{\iota} \tag{41}
\]
The derivatives of the loss function \(\zeta(\omega,\gamma)\) with respect to \(\omega\) and \(\gamma\) are
\[
\frac{\partial\zeta(\omega,\gamma)}{\partial\omega_{ik}} = \frac{2}{N}\sum_{i}^{N}\sum_{k=1}^{8}\triangle\Theta_{ik}(\omega_{ik},\gamma_{ik})^{\iota}\left[\frac{\partial\Theta_{ik}(\omega_{ik},\gamma_{ik})^{\iota+1}}{\partial\omega_{ik}} - \frac{\partial\Theta_{ik}(\omega_{ik},\gamma_{ik})^{\iota}}{\partial\omega_{ik}}\right] \tag{42}
\]
\[
\frac{\partial\zeta(\omega,\gamma)}{\partial\gamma_{i}} = \frac{2}{N}\sum_{i}^{N}\triangle\Theta_{i}(\omega_{i},\gamma_{i})^{\iota}\left[\frac{\partial\Theta_{i}(\omega_{i},\gamma_{i})^{\iota+1}}{\partial\gamma_{i}} - \frac{\partial\Theta_{i}(\omega_{i},\gamma_{i})^{\iota}}{\partial\gamma_{i}}\right] \tag{43}
\]
where
\[
\triangle\Theta_{ik}(\omega_{ik},\gamma_{i})^{\iota} = \left|\Theta_{ik}(\omega_{ik},\gamma_{i})^{\iota+1} - \Theta_{ik}(\omega_{ik},\gamma_{i})^{\iota}\right| \tag{44}
\]
and
\[
\Theta_{ik}(\omega_{ik},\gamma_{i})^{\iota} = \left[\mathrm{Im}\big(H\{\langle\theta^{\iota}_{ik}|\xi^{\iota}_{i}\rangle\}\big)\right] = \left[\mathrm{Im}\big(\cos(\omega_{ik}-\gamma_{i})^{\iota} + i\sin(\omega_{ik}-\gamma_{i})^{\iota}\big)\right] \tag{45}
\]
The changes in the phase angles (\(\triangle\omega\) and \(\triangle\gamma\)) of the Hadamard gate are evaluated as
\[
\triangle\omega^{\iota}_{ik} = -\sigma_{ik}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\omega^{\iota}_{ik}}\,\zeta(\omega,\gamma)^{\iota}\right\}^{t} \tag{46}
\]
\[
\triangle\gamma^{\iota}_{i} = -\sigma_{i}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\gamma^{\iota}_{i}}\,\zeta(\omega,\gamma)^{\iota}\right\}^{t} \tag{47}
\]
where \(\sigma_{ik}\) denotes the learning rate for the self-supervised updating of the weights in QFS-Net. It is computed using the relative difference between the candidate and its neighborhood qutrit neurons (intensities), with \(t > 0\), as
\[
\sigma_{ik} = \mu_{i} - \mu_{ik} \quad \forall k = 1,\ldots,8 \tag{48}
\]
Similarly, the learning rate for updating the activation is denoted as \(\sigma_{i}\) and is equal to the quantum fuzzy contribution of the candidate neuron (\(\mu_{i}\)).

The conditions for the super-linear convergence of the sequences \(\{\omega^{\iota}\}\) and \(\{\gamma^{\iota}\}\) can be formulated as [1]
\[
\lim_{\iota\to\infty}\frac{\|\omega^{\iota+1}-\omega^{*}\|}{\|\omega^{\iota}-\omega^{*}\|} \leq 1 \tag{49}
\]
and
\[
\|\upsilon^{\iota+1}\| = O(\|\delta^{\iota}\|) \tag{50}
\]
Also,
\[
\lim_{\iota\to\infty}\frac{\|\gamma^{\iota+1}-\gamma^{*}\|}{\|\gamma^{\iota}-\gamma^{*}\|} \leq 1 \tag{51}
\]
and
\[
\|\mu^{\iota+1}\| = O(\|\rho^{\iota}\|) \tag{52}
\]
In order to prove the convergence of the sequences \(\{\omega^{\iota}\}\) and \(\{\gamma^{\iota}\}\), by Taylor's theorem we obtain
\[
\zeta(\omega^{\iota+1},\gamma^{\iota+1}) - \zeta(\omega^{\iota},\gamma^{\iota}) =
\begin{bmatrix}\triangle\omega^{\iota}_{ik} & \triangle\gamma^{\iota}_{i}\end{bmatrix}
\begin{bmatrix}\dfrac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\omega^{\iota}_{ik}}\\[6pt]\dfrac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\gamma^{\iota}_{i}}\end{bmatrix}
+ O\big(\|\triangle\omega^{\iota}_{ik}\,\triangle\gamma^{\iota}_{i}\|^{2}\big) \tag{53}
\]
\[
\approx \left[-\sigma_{ik}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\omega^{\iota}_{ik}}\right\}^{2} - \sigma_{i}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\gamma^{\iota}_{i}}\right\}^{2}\right]\{\zeta(\omega^{\iota},\gamma^{\iota})\}^{t} \tag{54}
\]
Hence, \(\zeta(\omega^{\iota+1},\gamma^{\iota+1}) - \zeta(\omega^{\iota},\gamma^{\iota}) \leq 0\), and it is clearly evident that the sequences \(\{\omega^{\iota}\}\) and \(\{\gamma^{\iota}\}\) are monotonically decreasing. The coherent nature of these two sequences leads to
\[
\lim_{\iota\to\infty}\zeta(\omega^{\iota},\gamma^{\iota}) = \zeta(\omega^{*},\gamma^{*}) \tag{55}
\]
The rapid convergence of the iteration sequences \(\{\omega^{\iota}\}\) and \(\{\gamma^{\iota}\}\) is due to
\[
\lim_{\iota\to\infty}\frac{\|\zeta(\omega^{\iota+1},\gamma^{\iota+1}) - \zeta(\omega^{*},\gamma^{*})\|}{\|\zeta(\omega^{\iota},\gamma^{\iota}) - \zeta(\omega^{*},\gamma^{*})\|} \leq 1 \tag{56}
\]
The super-linear convergence of the sequences can be shown as follows. Let \(G_{\omega} = \frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\omega^{\iota}_{ik}}\); then
\[
\frac{\|\upsilon^{\iota+1}\|}{\|\delta^{\iota}\|} = \frac{\|\omega^{\iota+1}-\omega^{*}\|}{\left\|-\sigma_{ik}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\omega^{\iota}_{ik}}\,\zeta(\omega,\gamma)^{\iota}\right\}^{t}\right\|} \geq \frac{\|\omega^{\iota+1}-\omega^{*}\|}{\sigma_{ik}\,G_{\omega}\,\{\zeta(\omega,\gamma)^{\iota}\}^{t}} \tag{57}
\]
Hence,
\[
\|\omega^{\iota+1}-\omega^{*}\| = O\big(\{\zeta(\omega,\gamma)^{\iota}\}^{t}\big) \tag{58}
\]
Consequently,
\[
\|\upsilon^{\iota+1}\| = O(\|\delta^{\iota}\|) \tag{59}
\]
which proves that the iteration sequence \(\{\omega^{\iota}\}\) converges super-linearly. Similarly, let \(G_{\gamma} = \frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\gamma^{\iota}_{i}}\); then
\[
\frac{\|\mu^{\iota+1}\|}{\|\rho^{\iota}\|} = \frac{\|\gamma^{\iota+1}-\gamma^{*}\|}{\left\|-\sigma_{i}\left\{\frac{\partial\zeta(\omega,\gamma)^{\iota}}{\partial\gamma^{\iota}_{i}}\,\zeta(\omega,\gamma)^{\iota}\right\}^{t}\right\|} \geq \frac{\|\gamma^{\iota+1}-\gamma^{*}\|}{\sigma_{i}\,G_{\gamma}\,\{\zeta(\omega,\gamma)^{\iota}\}^{t}} \tag{60}
\]
Hence,
\[
\|\gamma^{\iota+1}-\gamma^{*}\| = O\big(\{\zeta(\omega,\gamma)^{\iota}\}^{t}\big) \tag{61}
\]
Consequently,
\[
\|\mu^{\iota+1}\| = O(\|\rho^{\iota}\|) \tag{62}
\]
which proves that the iteration sequence \(\{\gamma^{\iota}\}\) converges super-linearly.

REFERENCES

[1] L. J. Zhen, X. G. He, and D. S. Huang, "Super-linearly convergent BP learning algorithm for feed forward neural networks,"
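The super-linear ratio test used above can be illustrated numerically. The sketch below is not the QFS-Net update itself; it iterates Newton's method on the hypothetical scalar problem f(x) = x² − 2, whose error ratio (the analogue of the quotient in Eq. (49)) shrinks toward 0, the hallmark of super-linear convergence:

```python
import math

def newton_sqrt2(iters):
    """Newton iteration for f(x) = x^2 - 2; converges super-linearly
    (in fact quadratically) to x* = sqrt(2)."""
    x = 2.0
    errs = []
    for _ in range(iters):
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton step
        errs.append(abs(x - math.sqrt(2.0)))
    return errs

errs = newton_sqrt2(5)
# Successive error ratios e_{i+1}/e_i shrink toward 0 (super-linear),
# unlike a linearly convergent sequence, whose ratio approaches a
# positive constant.
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1) if errs[i] > 0]
```

A fixed-point iteration with linear convergence run through the same ratio computation would instead produce ratios hovering near its contraction constant.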