Towards Analog Memristive Controllers
Gourav Saha, Ramkrishna Pasumarthy, and Prathamesh Khatavkar
Abstract—Memristors, initially introduced in the 1970s, have received increased attention upon their successful synthesis in 2008. Considerable work has been done on their modeling and applications in specific areas; however, very little is known about the potential of memristors for control applications. Being nanoscopic variable resistors, one can consider their use in making variable gain amplifiers, which in turn can implement gain scheduled control algorithms. The main contribution of this paper is the development of a generic memristive analog gain control framework and the theoretic foundation of a gain scheduled robust-adaptive control strategy which can be implemented using this framework. Analog memristive controllers may find applications in the control of large arrays of miniaturized devices, where robust and adaptive control is needed due to parameter uncertainty and ageing issues.
Index Terms—BMI Optimization, Control of Miniaturized Devices, Gain Scheduling, Memristor, Robust and Adaptive Control
I. INTRODUCTION

The memristor [1], considered the fourth basic circuit element, remained dormant for four decades until the accidental discovery of memristance in nanoscopic crossbar arrays by a group of HP researchers [2]. The memristor, a contraction of "memory-resistor", has the capability of memorizing its history even after it is powered off. This property makes it a desirable candidate for designing high density non-volatile memory [3]. However, optimal design of such hardware architectures requires accurate knowledge of the nonlinear memristor dynamics. Hence, considerable effort has been channeled into mathematically modeling memristor dynamics (see [2], [4], [5], [6]). The memorizing ability of the memristor has led researchers to consider its possible use in neuromorphic engineering. Memristors can be used to make dense neural synapses [7], which may find applications in neural networks [8], character recognition [9], and in emulating evolutionary learning (like that of an amoeba [10]). Other interesting applications of the memristor include its use in generating higher frequency harmonics, which can be used in nonlinear optics [11], and in making programmable analog circuits [12], [13].

The memristor is slowly attracting the attention of the control community. Two broad areas have received attention. Control of memristive systems: works reported in [14], [15] give detailed insight into the modelling and control of memristive systems in the port-Hamiltonian framework, while state-of-the-art work includes [16], which studies global stability of memristive neural networks. Control using memristive systems: the very first work in this genre was reported in [17], where the author derived the describing function of a memristor, which can be used to study the existence
The authors are with the Department of Electrical Engineering, Indian Institute of Technology, Madras, India. E-mail: {ee13s005, ramkrishna, ee13s007}@ee.iitm.ac.in
Figure 1. a) The memristor as the missing link of electromagnetic theory, completing the pairwise relations among charge Q, current I, voltage V and flux Φ (dQ/dt = I, dQ/dV = C, dV/dI = R, dΦ/dQ = M, dΦ/dI = L, dΦ/dt = V). Adapted from [4]. b) The memristor as a series resistance R_on(w/D) + R_off(1 − w/D) formed by the doped and undoped regions of TiO2. Adapted from [2].

of undesirable limit cycles (i.e., sustained oscillations) in a closed loop system consisting of a memristive controller and a linear plant. Another line of work includes [18], [19], which use the memristor as a passive element to inject variable damping, thus ensuring better transient response. In this paper we lay the groundwork for using the memristor as an analog gain control (AGC) element for robust-adaptive control of miniaturized systems.

Why "Analog Memristive Controller"?:
Several applications need to control an array of miniaturized devices. Such devices demand robust-adaptive control due to parameter uncertainty (caused by design errors) and time varying nature (caused by ageing effects). Robust-adaptive control algorithms found in the literature are so complex that they require a microcontroller for implementation. This poses scalability and integration issues [20] (page 190), because a microcontroller is in itself a complicated device. The motive of this work is twofold: 1) to invoke a thought provoking question: "Can modern control laws (like robust-adaptive control) be implemented using analog circuits?"; 2) to suggest the memristor as an AGC element for implementing robust-adaptive control.

The paper is organized as follows. We begin our study by gaining a basic understanding of the memristor in Section II. A generic gain control architecture using the memristor is proposed in Section III. Section IV discusses a robust-adaptive control strategy which can be implemented in an analog framework. Section V deals with designing an analog controller for a miniaturized setup by using results from Sections III and IV.

Footnote: A microcontroller (or a few microcontrollers) may be used to control an array of miniaturized devices by using time multiplexing, in which a microcontroller controls each element of the array in a cyclic fashion. In such a scenario the microcontroller faces a huge computational load dependent on the size of the array. Upcoming ideas like "event-based control" promise to reduce the computational load by allowing aperiodic sampling. The motive of this work is not to challenge an existing idea but to propose an alternative one.

Footnote: CMOS based hybrid circuits like that proposed in [21] can also act as variable gain control elements. However, memristors are much smaller (of nanometre scale) than such hybrid circuits and hence ensure better scalability. The gain-span of the memristor is also larger than that of CMOS based hybrid circuits.
Figure 2. Plot of the Joglekar window function with p = 8 for the HP memristor. For a given tolerance in f(w/D), the safe zone of operation is w/D ∈ [0.08, 0.91]; the effective normalized channel length is (0.91 − 0.08) = 0.83.

II. MEMRISTOR PRELIMINARIES
Chua [1] postulated the existence of the memristor as a missing link in the formulation of electromagnetism; Fig. 1a) gives an intuitive explanation. As evident from Fig. 1a), such an element links charge Q and flux Φ, i.e. Φ = f(Q). Differentiating this relation using the chain rule and applying Lenz's law yields

V = M(Q) I    (1)

suggesting that the element acts like a charge controlled variable resistor with M(Q) = df/dQ as the variable resistance. Such devices have non-volatile memory ([1], [4]) and are hence called memristors. Memristive systems [22] are a generalization of the memristor with state space representation

Ẇ = F(W, I);  V = R(W, I) I    (2)

where W are the internal state variables, I is the input current, V is the output voltage and R(W, I) is the memristance. The memristor reported by HP Labs in [2] is essentially a memristive system with the following state space representation:

ẇ = (µ R_on / D) f(w/D) I;  V = [R_on (w/D) + R_off (1 − w/D)] I    (3)

It consists of two layers: an undoped region of oxygen-rich TiO2 and a doped region of oxygen-deficient TiO2−x. The doped region has channel length w and low resistance, while the undoped region has channel length D − w and high resistance. These regions form a series resistance as shown in Fig. 1b. R_on and R_off are the effective resistances when w = D and w = 0 respectively, with R_off ≫ R_on; µ is the ion mobility. When an external bias is applied, the boundary between the two regions drifts. This drift is slower near the boundaries, i.e. ẇ → 0 as w → 0 or w → D. This behaviour is captured in [4], [5] using the window function f(w/D). A plot of the Joglekar window function f(w/D) = 1 − (2w/D − 1)^(2p) is shown in Fig. 2. A close investigation of various proposed models [2], [4], [5], [6] reveals two important facts: 1) As shown in Fig. 2, f(w/D) is approximately 1, except near the boundaries.
2) The boundary dynamics of the memristor is highly non-linear and still a matter of debate. Hence the region w ∈ [w_l, w_h], 0 < w_l < w_h < D, where f(w/D) ≈ 1, is the safe zone in which the memristor dynamics can be approximated as

Q̇_M = I;  V = [R_Soff − α_S Q_M] I    (4)

where α_S = (R_Soff − R_Son)/Q_SM, R_Soff = R_off − (R_off − R_on)(w_l/D), R_Son = R_off − (R_off − R_on)(w_h/D), D_S = w_h − w_l and Q_SM = D_S D/(µ R_on). The superscript "S" means "safe". In equation (4) we define Q_M such that Q_M = 0 when w = w_l; then Q_M = Q_SM when w = w_h. Hence equation (4) is valid when Q_M ∈ [0, Q_SM].

From now on, the following conventions will be used:
1) "Memristor" means an HP memristor operating in the safe zone, with dynamics governed by equation (4).
2) The schematic symbol of the memristor shown in Fig. 1a will be used. Conventionally, the resistance of the memristor decreases as current enters from the port marked "+".
3) HP memristor parameters: R_on = 100 Ω, R_off = 16 kΩ, D = 10 nm, µ = 10^−14 m² s⁻¹ V⁻¹. We model the memristor using the Joglekar window function with p = 8. The safe zone is marked by w_l = 0.08 D and w_h = 0.91 D (see Fig. 2), in which f(w/D) ≈ 1. This gives R_Soff ≈ 15 kΩ, R_Son ≈ 1.5 kΩ, Q_SM = 83 µC and α_S ≈ 1.6 × 10^8 Ω/C. These parameters will be used for design purposes.

III. ANALOG GAIN CONTROL FRAMEWORK
In this section we design an AGC circuit whose input-output relation is governed by the following equation:

K̇ = α_k V_C(t);  V_u(t) = K V_e(t)    (5)

We assume that K > 0. This is basically a variable gain proportional controller with output voltage V_u, error voltage V_e and variable gain K. K is controlled by the control voltage V_C; α_k determines the sensitivity of K to V_C. An analog circuit following equation (5) is generic in the sense that it can be used to implement any gain scheduled control algorithm. We assume V_C and V_e are band limited, i.e. the maximum frequency components of V_C and V_e are ω_MC and ω_Me respectively. Knowledge of ω_MC and ω_Me is assumed. The proposed circuit is shown in Fig. 3. We assume the availability of positive and negative power supplies V_DD and −V_DD. All op-amps are powered by V_DD and −V_DD.

Claim 1:
The proposed circuit shown in Fig. 3 is an approximate analog realization of equation (5) if:
1) The electrical components in Fig. 3 are ideal. Also, for the MOSFETs, the threshold voltage V_th ≈ 0.
2) ω_m = 1000 max{(ω_MC + ω_Me), 2ω_MC}, R_f C_f = 100/ω_m and 2π/ω_m ≪ R_e C_e < 1/(ω_MC + ω_Me).

We understand the working of the proposed circuit by studying its four distinct blocks, and in the process prove the above claim. The output response of each block for a given input, V_e and V_C, is shown in Fig. 4. It should be noted that the tuning rules proposed in the claim are only one set of values which will make the circuit follow equation (5).

Remark 1:
The substrates of all NMOS and PMOS devices are connected to −V_DD and V_DD respectively. In the ON state, voltages between −V_DD and (V_DD − V_th) pass through an NMOS, and voltages between −(V_DD − V_th) and V_DD pass through a PMOS. If V_th ≈ 0, any voltage between −V_DD and V_DD will pass through an NMOS or a PMOS in the ON state.

Footnote: V_DD and −V_DD are the highest and the lowest potentials available in the circuit. From a control standpoint, this imposes bounds on the control input V_u.

Footnote: With a slight abuse of terminology, an NMOS (PMOS) is said to be in the ON state when its gate voltage is V_DD (−V_DD). A voltage is said to pass through a MOSFET if the exact voltage applied at its source (drain) terminal appears at its drain (source) terminal.

Figure 3. Memristive AGC circuit, comprising the Memristor Gain Block, High Pass Filter, Envelope Detector, Charge Saturator Block, Integrator and Voltage Follower. V_e and V_C are the inputs; V_u, V_L1 and V_L2 are the outputs. V_L1 and V_L2 are zone indicating voltages.

A. Memristor Gain Block
The key idea of this block is adapted from [23]. V_mC and V_e are the inputs to this block while V_m is the output. For now we assume V_mC = V_C (details are discussed in Section III-D). The current I_m through the memristor is

I_m(t) = V_e(t) sin(ω_m t)/R_I [the term I_noise] + V_C(t)/R_C [the term I_cntrl]

From equation (4), the resistance of the memristor is M(Q_M) = R_Soff − α_S Q_M. Differentiating this relation gives Ṁ = −α_S I_m(t). Hence the voltage V_m is given by

Ṁ = −α_S I_m(t);  V_m(t) = −I_m(t) M    (6)

Note that ω_m = 1000 max{(ω_MC + ω_Me), 2ω_MC} > ω_Me. In such a case the minimum component frequency of I_noise = V_e(t) sin(ω_m t)/R_I is ω_m − ω_Me. Now

ω_m − ω_Me = 1000 max{(ω_MC + ω_Me), 2ω_MC} − ω_Me ≥ 1000(ω_MC + ω_Me) − ω_Me = 1000 ω_MC + 999 ω_Me ≫ ω_MC

This implies that the lowest component frequency of I_noise is much greater than the highest component frequency of I_cntrl. Also note that Ṁ = −α_S I_m(t) is an integrator with input I_m(t) and output M(t). As an integrator is a low pass filter, the effect of the high frequency component I_noise on M is negligible compared to that of I_cntrl. Hence equation (6) can be modified as

Ṁ ≈ −(α_S/R_C) V_C(t);  V_m(t) = V_m1 + V_m2    (7)

where V_m1 = −(M V_e(t)/R_I) sin(ω_m t) and V_m2 = −M V_C(t)/R_C. Note that V_m1 is the modulated form of the desired output with gain K = M/R_I.

Remark 2: M(t) is the variable gain. According to equation (5), V_e should not affect M(t). Without modulating V_e, the effect of V_e(t) on M(t) would not have been negligible.

B. High Pass Filter (HPF)
The role of the HPF is to offer negligible attenuation to V_m1 and high attenuation to V_m2, thereby ensuring that the Envelope Detector can recover the desired output.

Note that M(t) is basically the integral of V_C(t). Since integration is a linear operation, it does not do frequency translation; hence the component frequencies of M(t) and V_C(t) are the same. Let ω_m1 and ω_M2 denote the minimum component frequency of V_m1 and the maximum component frequency of V_m2 respectively. Now

ω_m1 = ω_m − (maximum frequency component of M V_e) = ω_m − (maximum frequency component of V_C V_e) = ω_m − (ω_MC + ω_Me)    (8)

ω_M2 = maximum frequency component of M V_C = maximum frequency component of V_C · V_C = 2 ω_MC    (9)

The attenuation offered by the HPF is −10 log(1 + (ω R_f C_f)^−2) dB. We study the attenuation characteristics at two frequencies:

1) At ω = ω_m1:

ω_m1 R_f C_f = 100 (ω_m − (ω_MC + ω_Me))/ω_m = 100 (1 − (ω_MC + ω_Me)/ω_m) ≥ 100 (1 − 1/1000) ≈ 100    (10)

Inequality (10) holds because ω_m = 1000 max{(ω_MC + ω_Me), 2ω_MC} implies ω_m/(ω_MC + ω_Me) ≥ 1000. For ω_m1 R_f C_f ≥ 100 the attenuation is approximately 0 dB. As the HPF offers almost no attenuation to the minimum component frequency of V_m1, it offers no attenuation to the higher component frequencies of V_m1 either. Hence V_m1 suffers negligible attenuation.

2) At ω = ω_M2:

ω_M2 R_f C_f = 100 · 2ω_MC/ω_m ≤ 0.1    (11)

Inequality (11) holds because ω_m = 1000 max{(ω_MC + ω_Me), 2ω_MC} implies ω_m/(2ω_MC) ≥ 1000. For ω_M2 R_f C_f ≤ 0.1 the attenuation is more than 20 dB. As the HPF offers high attenuation to the maximum component frequency of V_m2, it offers higher attenuation to the lower component frequencies of V_m2. Hence V_m2 gets highly attenuated.

As V_m1 undergoes almost no attenuation (≈ 0 dB), the output of the HPF is V_f = −V_m1. The minus sign before V_m1 is justified as the HPF is in inverting mode.

C. Envelope Detector
The input to this block is V_f = (M V_e(t)/R_I) sin(ω_m t). Similar to amplitude modulation (AM), sin(ω_m t) is the carrier and M V_e(t)/R_I is the signal to be recovered. We use a polarity sensitive envelope detector, as M V_e(t)/R_I can be both positive and negative. The key idea used here is that the polarity of V_e and V_u is the same, since K = M/R_I > 0. Hence we detect the positive peaks of V_f when V_e is positive by keeping T1 ON and T_C1 OFF. When V_e is negative, the negative peaks of V_f are detected by keeping T_C1 ON and T1 OFF. The remaining operation of the envelope detector is similar to a conventional diode-based envelope detector and can be found in [24]. Effective envelope detection using a diode based envelope detector requires:

1) ω_m ≫ ω_MC + ω_Me, i.e. the frequency of the carrier should be much greater than the maximum component frequency of the signal. Here sin(ω_m t) is the carrier, whose frequency is ω_m, while M V_e/R_I is the signal, whose maximum component frequency is ω_MC + ω_Me (refer to equation (8)).

2) 2π/ω_m ≪ R_e C_e, i.e. the time constant of the envelope detector should be much larger than the time period of the carrier. This ensures that the capacitor C_e (refer to Fig. 3) discharges slowly between peaks, thereby reducing the ripple.

3) R_e C_e < 1/(ω_MC + ω_Me), i.e. the time constant of the envelope detector should be less than the time period corresponding to the maximum component frequency of the signal being modulated. This is necessary so that the output of the envelope detector can effectively track the envelope of the modulated signal.

The proposed tuning rule in Claim 1 satisfies these conditions. In general, for amplitude modulation (AM) the choice of modulating frequency is about 100 times the maximum component frequency of the signal being modulated. Unlike AM, our multiplying factor is 1000 instead of 100. The reason for choosing this should be clearly understood.
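As a numerical sanity check, the sketch below applies Claim 1 style tuning to illustrative band limits and evaluates the HPF attenuation at the band edges of equations (8) and (9). The band limits, and the choice of R_e C_e at half its upper bound, are assumptions of this sketch rather than values from the paper.

```python
import math

# Claim 1 style tuning for illustrative band limits.
w_MC = w_Me = 2 * math.pi * 10            # 10 Hz band limits (illustrative)
w_m = 1000 * max(w_MC + w_Me, 2 * w_MC)   # modulating (carrier) frequency
RfCf = 100.0 / w_m                        # HPF time constant
ReCe = 0.5 / (w_MC + w_Me)                # envelope-detector time constant (assumed)

def hpf_attenuation_db(w):
    """First order HPF attenuation -10*log10(1 + (w*RfCf)**-2) in dB."""
    return -10.0 * math.log10(1.0 + (w * RfCf) ** -2)

w_m1 = w_m - (w_MC + w_Me)   # lowest component frequency of V_m1, equation (8)
w_M2 = 2 * w_MC              # highest component frequency of V_m2, equation (9)
print(hpf_attenuation_db(w_m1))   # close to 0 dB: V_m1 passes
print(hpf_attenuation_db(w_M2))   # about -20 dB: V_m2 is strongly attenuated
```

With these values the three envelope-detection conditions hold as well: the carrier is 1000 times faster than the signal band, its period is far below R_e C_e, and R_e C_e stays under 1/(ω_MC + ω_Me).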
In a conventional diode based peak detector the ripple in the output voltage can be decreased either by increasing the modulating frequency ω_m or by increasing the time constant R_e C_e. But R_e C_e is upper bounded by 1/(ω_MC + ω_Me). In our case the signal to be modulated, i.e. M V_e/R_I, may contain frequencies anywhere in the range [0, ω_MC + ω_Me]. For a given R_e C_e, a signal with a lower frequency suffers higher ripple. Hence, to constrain the ripple for any frequency in the specified range, one must constrain the ripple at zero frequency (DC voltage). For a DC voltage the ripple factor is approximately π/(√3 ω_m R_e C_e). With the choice of R_e C_e made in Claim 1 and 100 as the multiplying factor, the ripple factor is unacceptably large; we therefore choose a multiplying factor of 1000, which reduces the ripple factor by a factor of ten. The output of the envelope detector is M V_e(t)/R_I, and the final output of the circuit is

Ṁ_1 ≈ −(α_S/(R_I R_C)) V_C(t);  V_u(t) = M_1 V_e    (12)

where M_1 = M/R_I. Comparing equations (5) and (12) we see that α_k = −α_S/(R_I R_C) and K = M_1, where M_1 ∈ [R_Son/R_I, R_Soff/R_I]. R_I is tuned to get the desired range of gain, while R_C is a free parameter which can be tuned according to need.

D. Charge Saturator Block
This block limits the memristor to its safe zone, hence ensuring the validity of equation (4).

Figure 4. Outputs of the various stages of the circuit shown in Fig. 3, i.e. V_m, V_f and V_u, corresponding to the inputs V_e and V_C. Simulation parameters: HP memristor in the safe zone, R_C = 100 kΩ, R_I = 1 kΩ, V_DD = 5 V, with ω_m, R_f C_f, R_e C_e and R_s C_s tuned as per Claim 1. V_u is obtained by simulating the circuit shown in Fig. 3. The true output is obtained by numerically solving equation (5). In both cases we use the V_e and V_C shown in Fig. 4a and Fig. 4b respectively as inputs.

In the safe zone the following equations are valid:

dV_IG/dt = −V_mC/(R_s C_s);  dQ_M/dt = V_mC/R_C  ⇒  dV_IG/dQ_M = −R_C/(R_s C_s)    (13)

Recall that in the safe zone w ∈ [w_l, w_h] and Q_M ∈ [0, Q_SM]. We assume that the integrator voltage V_IG = 0 when Q_M = 0 (or w = w_l). Integrating equation (13) under this assumption yields

V_IG = −(R_C/(R_s C_s)) Q_M    (14)

In equation (14), if Q_M = Q_SM (or w = w_h), then V_IG = −R_C Q_SM/(R_s C_s). Hence V_IG ∈ [−R_C Q_SM/(R_s C_s), 0] when the memristor is in its safe zone. In Fig. 3, comparators O1 and O2 compare V_IG against reference levels to determine whether the memristor is in the safe zone. Note that: 1) The reference voltages of comparators O1 and O2 are GND and the voltage V_H (across capacitor C_H) respectively; V_H is set to −R_C Q_SM/(R_s C_s) by the Synchronization Block (refer to Section III-E). 2) In Fig. 3 any MOSFET transistor couple, T_i and T_Ci, is in complementary states, i.e. if T_i is ON, T_Ci is OFF and vice versa. 3) The comparator outputs V_L1 and V_L2 give knowledge about the state of the memristor, with V_L1, V_L2 ∈ {−V_DD, V_DD}.
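The linear map (14) between integrator voltage and memristor charge can be checked numerically together with the safe-zone constants of Section II. In the sketch below, R_s C_s is chosen, purely for illustration, so that the safe zone maps onto [−V_DD, 0]:

```python
# Check of the safe-zone constants of Section II and of equation (14).
R_on, R_off, D, mu = 100.0, 16e3, 10e-9, 1e-14   # HP device values
wl, wh = 0.08 * D, 0.91 * D                      # safe-zone boundaries
R_Soff = R_off - (R_off - R_on) * wl / D         # ~15 kOhm
R_Son = R_off - (R_off - R_on) * wh / D          # ~1.5 kOhm
Q_SM = (wh - wl) * D / (mu * R_on)               # ~83 uC
alpha_S = (R_Soff - R_Son) / Q_SM                # ~1.6e8 Ohm/C

# Equation (14): V_IG tracks Q_M linearly.  R_s C_s is chosen here so that
# the safe zone Q_M in [0, Q_SM] maps onto V_IG in [-V_DD, 0] (illustrative).
V_DD, R_C = 5.0, 100e3
RsCs = R_C * Q_SM / V_DD
V_IG = lambda Q: -R_C / RsCs * Q                 # equation (14)
print(round(R_Soff), round(R_Son), Q_SM, alpha_S, V_IG(Q_SM))
```

The endpoint values V_IG(0) = 0 and V_IG(Q_SM) = −V_DD are exactly the comparator reference levels GND and V_H used by the Charge Saturator Block.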
Now, depending on V_L1 and V_L2, three different cases may arise:

Case 1 (V_H < V_IG < 0 ⇒ V_L1 = V_DD, V_L2 = V_DD): This implies that the memristor is in its safe zone. V_C passes through the corresponding transistor couple, setting V_mC = V_C.

Case 2 (V_IG < V_H ⇒ V_L1 = V_DD, V_L2 = −V_DD): This happens when Q_M ≥ Q_SM. Since Q_M should now only decrease, a positive V_C is blocked, making V_mC = 0, while a negative V_C passes through, setting V_mC = V_C.

Case 3 (V_IG > 0 ⇒ V_L1 = −V_DD, V_L2 = V_DD): This happens when Q_M ≤ 0. Since Q_M should now only increase, a negative V_C is blocked, making V_mC = 0, while a positive V_C passes through, setting V_mC = V_C.

E. Synchronization Block
Operation of the Charge Saturator Block assumes that: 1) V_IG = 0 when Q_M = 0 (or w = w_l); 2) the voltage V_H across capacitor C_H equals −R_C Q_SM/(R_s C_s). This block ensures these two conditions and thereby guarantees that the memristor and the integrator in Fig. 3 are in "synchronization". The Synchronization Block is shown in Fig. 5a). In Fig. 5a, the op-amp with the memristor, the integrator and the capacitor C_H are indeed part of the circuit shown in Fig. 3; such a change in circuit connection is possible using appropriate switching circuitry. This block operates in two modes:

Preset Mode:
We first ensure that V_IG = 0 when w = w_l. In this mode switches S1 and S2 are ON and switches S3 and S4 are OFF. When S1 is closed, the residual charge in the integrator capacitor gets neutralized, ensuring V_IG = 0. Next we make w = w_l. Note that V_1 = −M V_DD/R_C and V_2 = −R_Soff V_DD/R_C. If M < R_Soff then V_1 > V_2 ⇒ V_L = −V_DD. Hence the path ADBC of the Wheatstone bridge arrangement is active, making the current flow from the (−)ve to the (+)ve terminal of the memristor. This increases M till M = R_Soff. If M > R_Soff, the path ABDC is active, making M decrease till M = R_Soff.

Online Calibration: Immediately after Preset Mode is complete, S1 and S2 are switched OFF and S3 and S4 are switched ON. Now V_1 = −M V_DD/R_C and V_2 = −R_Son V_DD/R_C. In this step V_L = V_DD, as M > R_Son always. Path ABDC is active, driving M to R_Son. As V_L is given as an input to the integrator, capacitor C_H also gets charged. Note that in this step the memristor works in the safe zone; also V_IG = 0 when Q_M = 0 (ensured by Preset Mode). Hence the relation between V_IG and Q_M is governed by equation (14). When M gets equalized to R_Son, Q_M = Q_SM, thereby making V_H = V_IG = −R_C Q_SM/(R_s C_s).

Footnote: If we tune R_s and C_s s.t. −R_C Q_SM/(R_s C_s) = −V_DD, then V_IG ∈ [−V_DD, 0] when the memristor is in the safe zone, and we can directly use the power supply −V_DD as the reference voltage for comparator O2. This would eliminate the need for capacitor C_H and "Online Calibration". However, if −R_C Q_SM/(R_s C_s) ≠ −V_DD (due to tuning error), this approach may drive the memristor out of the safe zone.

Each of the modes operates for a predefined time. The resistances, and hence the voltages V_1 and V_2, may get equalized before the predefined time, after which V_L will switch rapidly. Such rapid switching can be prevented by replacing the comparator by a cascaded arrangement of a differential amplifier followed by a hysteresis block.

Various graphs corresponding to the synchronization process are shown in Fig. 5 b), c), d), e). The Memristor Gain Block and the Integrator of the Charge Saturator Block should be periodically synchronized to account for circuit non-idealities. One such non-ideality arises if the capacitor C_H is leaky, causing the voltage V_H to drift with time.

Remark 3: In the discussion of the Synchronization Block we have slightly misused the symbols R_Son and R_Soff. The resistances R_Son and R_Soff shown in Fig. 5a should be close to the actual R_Son and R_Soff (as mentioned in Section II) respectively; it is not necessary that they be exactly equal. However, their resistances must lie within the safe zone. This alleviates analog implementation by eliminating the need for precision resistors. It should be noted that the maximum and minimum resistances of the memristor in the safe zone are governed by the resistances R_Son and R_Soff used in the Synchronization Block, not the actual R_Son and R_Soff.

To conclude, in this section we designed an analog gain control framework using the memristor. The schematic of the memristive AGC is shown in Fig. 3, and its circuit parameters can be tuned using Claim 1. The memristive AGC designed in this work is "generic" in the sense that it can be used to implement several gain-scheduled control algorithms.

IV. CONTROL STRATEGY

As mentioned in the introduction, we are interested in designing robust adaptive control algorithms to tackle issues like parameter uncertainty (caused by design errors) and time varying nature (caused by ageing effects and atmospheric variation). 'Simplicity' is the key aspect of any control algorithm that is to be implementable in an analog framework, as we do not have the flexibility of 'coding'. Robust adaptive control algorithms found in the control literature cannot be implemented in an analog framework due to their complexity. Here we propose a simple gain-scheduled robust adaptive control algorithm which can be easily implemented using the memristive AGC discussed in Section III.
We prove the stability of the proposed algorithm using a Lyapunov-like method in Section IV-A.

Notation: The notation used in this paper is quite standard. R+ and R^n denote the set of positive real numbers and the n-dimensional real space respectively. I represents the identity matrix of appropriate dimension. |·| is the absolute value operator. ∅ represents the null set. The boldface symbols S and S+ represent the set of all symmetric matrices and the set of positive definite symmetric matrices respectively. inf(·) (sup(·)) represents the infimum (supremum) of a set. For a matrix A, λ_m(A) and λ_M(A) denote the minimum and maximum eigenvalues of A respectively. A ⪯ B implies B − A is positive semi-definite, while A ≺ B implies B − A is positive definite. The Euclidean norm of a vector and the induced spectral norm of a matrix are denoted ‖·‖. The operator ×, when applied to sets, is the Cartesian product of the sets. Conv{A} is the convex hull of set A. The analysis presented further in this paper uses ideas from real analysis, linear algebra and convex analysis; for completeness, these ideas are briefly reviewed in Appendix C.

Figure 6. a) Hysteresis function as mentioned in equation (16); T is the scan time, and α ∈ R+ − {0} and γ ∈ [0, 1) are tuning constants. b) The first six images show the scan/rest mode of RGS; the last three images depict the concept of drifting. c) The worst case scan time.
A. Reflective Gain Space Search (RGS)

Consider a SISO uncertain linear time variant (LTV) system with states x = [x_1 x_2 ··· x_N]^T ∈ R^N, input u ∈ R and output y ∈ R described by the following state space equation:

ẋ = A(t) x + B(t) u;  y = C x = x_1    (15)

where A(t) is in companion form, with ones on the superdiagonal and last row [a_1(t) a_2(t) ··· a_N(t)], and B(t) = [0 ··· 0 b(t)]^T. The output matrix C = [1 0 ··· 0].

Assumptions:
1) The order N of the system is known.
2) The variation of a_i(t), ∀i = 1, 2, ..., N, and of b(t), due to parameter uncertainty and time variation, is bounded, i.e. (A(t), B(t)) belongs to a bounded set L. L is assumed to be a connected set. A(t) and B(t) are not known, but knowledge of L is assumed. This assumption basically means that we do not have knowledge of a_i(t) and b(t); however, we know the ranges in which they lie.
3) ‖Ȧ‖ ≤ δ_A and ‖Ḃ‖ ≤ δ_B. Knowledge of δ_A and δ_B is assumed. This assumption basically means that A(t) and B(t) are continuous in time.
4) Let b(t) ∈ [b_l, b_u] s.t. b_l ≤ b_u, and either b_l > 0 or b_u < 0. This choice of b_l and b_u is explained in Section IV-B.

An example to clarify the concept of L, δ_A and δ_B is dealt with later in Section IV-B; however, we give the following example to better explain Assumption 4.

Example 1: Consider two cases: 1) an interval containing zero, e.g. b(t) ∈ [−1, 1]; 2) an interval excluding zero, e.g. b(t) ∈ [0.1, 1]. According to Assumption 4, Case 2 is possible while Case 1 is not. This example illustrates that, under Assumption 4, the sign of b(t) does not change with time and is known with absolute certainty. Note that the sign of b(t) decides the sign of the static gain of the system. So Assumption 4, in a certain sense, means that the sign of the static gain does not change with time and is known with absolute certainty. For all practical scenarios such an assumption is valid.

Notice that all assumptions are mild from a practical viewpoint.
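The meaning of the set L and the drift bounds δ_A, δ_B can be made concrete with a small numerical sketch. The second-order plant below, with sinusoidally varying coefficients, is purely illustrative; the bounds are estimated by finite differences, and for the single-row matrix Ȧ the Frobenius norm computed here coincides with the induced spectral norm.

```python
import math

# Illustrative 2nd-order LTV plant in the companion form of equation (15):
# a1(t), a2(t) and b(t) vary sinusoidally inside known ranges (the set L).
a1 = lambda t: -2.0 + 0.5 * math.sin(0.1 * t)
a2 = lambda t: -3.0 + 0.5 * math.cos(0.1 * t)
b  = lambda t:  1.0 + 0.2 * math.sin(0.05 * t)   # b(t) in [0.8, 1.2], so b_l > 0

def fro(M):
    """Frobenius norm; equals the spectral norm for a single-row matrix."""
    return math.sqrt(sum(v * v for row in M for v in row))

# Estimate delta_A and delta_B of Assumption 3 by sampling t in [0, 100).
h = 1e-4
ts = [i * 0.01 for i in range(10000)]
dA = max(fro([[0.0, 0.0],
              [(a1(t + h) - a1(t)) / h, (a2(t + h) - a2(t)) / h]]) for t in ts)
dB = max(abs((b(t + h) - b(t)) / h) for t in ts)
print(dA, dB)   # here ||Adot|| = 0.05 for all t, and ||Bdot|| peaks at 0.01
```

Since b(t) never crosses zero, Assumption 4 also holds for this plant, with b_l = 0.8 and b_u = 1.2.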
Our aim is to regulate the output around the operating point y* = x_1* = r. Conventional set-point control consists of a bias term u_b(t) plus the regulation control input u_r(t), i.e. u(t) = u_b(t) + u_r(t). Here we assume that u_b(t) is designed s.t. x* = [r 0 ... 0]^T is the equilibrium point, and concentrate on the synthesis of u_r(t). For simplicity we consider r = 0, i.e. x* = [0 0 ... 0]^T is the equilibrium point; as we are dealing with a linear system, the same analysis is valid for r ≠ 0. The controller structure is u_r(t) = −K(t) y(t), where K(t) ∈ [k_m, k_M] is the variable gain s.t. 0 < k_m < k_M. Let E = x^T P x be the Lyapunov candidate function with P ∈ S+. Then RGS is a simple gain-scheduled control strategy given by the following equation:

K̇ = sgn · h(Ė/E);  u_r = −K y    (16)

where sgn ∈ {−1, 1} and h(Ė/E) is the hysteresis function shown in Fig. 6a. The working of RGS can be explained as follows:

1) RGS finds the stabilizing gain, i.e. the gain which renders Ė < 0 (Ė < −αE in a stricter sense), by reflecting back and forth between [k_m, k_M]. RGS is said to be in "Scan Mode" when it is scanning for the stabilizing gain. It goes to "Rest Mode" when the stabilizing gain is found. The RGS scan cycle is depicted in the first six images of Fig. 6b).

2) RGS uses Ė as a stability indicator. Ė is found by differentiating E, which in turn is calculated as E = x(t)^T P x(t). To get the states x(t) we differentiate the output y(t) N − 1 times.

3) Scan Mode is triggered when Ė > −γαE (refer to Fig. 6a). Scan Mode is associated with a scan time T, i.e. the time taken to scan from k_m to k_M. Hence h(Ė/E) = (k_M − k_m)/T. The value of sgn is 1 when the gain space is searched from k_m to k_M and −1 otherwise. Scan Mode operates till Ė > −αE.
Footnote: Unlike for an LTI system, the static gain of an LTV system may not be well defined. Here we define the static gain of LTV system (15) as b(t)/a_1(t).

Footnote: The use of a time invariant Lyapunov function to analyse the stability of LTV systems can be found in [25] (Chap. 5, 7) and [26] (Chap. 3).

Footnote: The term "stabilizing gain" is slightly misused. Stability of an LTV system cannot be assured even if the closed loop system matrix, A(t) − B(t)K(t)C, has eigenvalues with negative real part for all t > 0 ([27], [28]).

4) Rest Mode is triggered when Ė < −αE. In this mode h(Ė/E) = 0, i.e. the stabilizing gain is held constant. Rest Mode operates till Ė > −γαE.

5) In the process of finding the stabilizing gain, the LTV system may expand (E increases) in Scan Mode and will contract (E decreases) in Rest Mode. RGS ensures that, even in the worst case, contraction is always dominant over expansion, guaranteeing stability.

The stabilizing gain set K_{P,s}(t), for a given P ≻ 0 and s ∈ R+ at time t, is defined as

K_{P,s}(t) = {K ∈ R : A_C(t)^T P + P A_C(t) ⪯ −sI, A_C(t) = A(t) − B(t)KC, k_m ≤ K ≤ k_M}

Lemma 1: If K_{P,s}(t) ≠ ∅, ∀t ≥ 0, then under Assumption 3, K_{P,s}(t) ∩ K_{P,s}(t + δt) ≠ ∅ as δt → 0.

Proof: This lemma basically means that the stabilizing gain set drifts rather than jumps: if the stabilizing gain set is drifting, then there is an intersection between the stabilizing gain sets at times t_1 and t_2 whenever t_2 − t_1 is small. Drifting is obvious under Assumption 3; however, a formal proof is given in Appendix A.

The concept of "drifting" is depicted in the last three images of Fig. 6b. Notice that at t = t_1 the stabilizing gain set is almost found. Two possible cases are shown in Fig. 6b for t = t_1 + δt, where δt → 0. In the first case the stabilizing gain set drifts, while in the other case the stabilizing gain set jumps. In the first case the stabilizing gain is found and RGS goes to Rest Mode.
In the second case RGS just misses the stabilizing gain set. Hence, if the stabilizing set keeps jumping, the Scan Mode may never end. As stated in Lemma 1, the stabilizing set K_{P,s}(t) never jumps if Assumption 3 is valid. Lemma 1 is therefore important to guarantee an upper bound on the duration of Scan Mode, from which we get the following lemma.

Lemma 2: If K_{P,s}(t) ≠ ∅ for all t ≥ 0, then under Assumption 3 the maximum duration of Scan Mode is 2T.

Proof: This is a direct consequence of Lemma 1. The worst-case duration of 2T occurs in only two cases, both of which are shown in Fig. 6c.

Theorem 1: The LTV system characterized by equation (15) and the set L is stable under the RGS control law of equation (16) if the following criteria are satisfied:
1) [C1] Corresponding to every (A, B) ∈ L there exists at least one gain K_AB ∈ [k_m, k_M] s.t.
  (A − BK_AB C)^T P + P(A − BK_AB C) ⪯ −sI   (17)
for some s ∈ R₊.
2) [C2] The scan time satisfies T < λ_m(P)α²(1 − γ²)/(8δ max(λ_M^L, 0)) (read as T < ∞ when λ_M^L ≤ 0). Here γ ∈ [0, 1), δ = δ_A + δ_B k_M and λ_M^L = max_{Q ∈ S_Q^L} λ_M(Q), where the set
  S_Q^L = { Q : Q = (A − BKC)^T P + P(A − BKC), (A, B) ∈ L, K ∈ [k_m, k_M] }.

The stabilizing gain set K_{P,s}(t_o) at time t = t_o is just a "Lyapunov way" of describing the set of gains which stabilize the corresponding LTI system (A(t_o), B(t_o), C) at time t = t_o.

Proof: Consider the Lyapunov candidate E = x^T P x which, when differentiated along the system's trajectory, yields
  Ė = x^T Q(t) x ≤ λ_M(Q(t)) ‖x‖² ≤ λ_M^L ‖x‖²   (18)
where Q(t) = A_C(t)^T P + P A_C(t) and A_C(t) = A(t) − B(t)K(t)C. Clearly Q(t) is symmetric, as it is the sum of the matrix P A_C(t) and its transpose A_C(t)^T P. Note that λ_M(Q(t)) ≤ λ_M^L for all (A, B) ∈ L and all K ∈ [k_m, k_M].
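The first inequality in (18) is simply the Rayleigh-quotient bound x^T Q x ≤ λ_M(Q)‖x‖² for a symmetric matrix. A quick numerical sanity check (the matrix and vector below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M + M.T                          # symmetric, playing the role of Q(t)
lam_max = np.linalg.eigvalsh(Q).max()  # largest eigenvalue lambda_M(Q)
x = rng.standard_normal(3)

quad = x @ Q @ x                     # x^T Q x
bound = lam_max * (x @ x)            # lambda_M(Q) * ||x||^2
```

For any symmetric Q and any x, `quad` can never exceed `bound`, which is exactly what lets (18) replace the quadratic form by an eigenvalue estimate.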
When λ_M^L < 0, stability is obvious since, by inequality (18), Ė < 0 for all (A, B) ∈ L and all K ∈ [k_m, k_M]. As Ė < 0 for any K ∈ [k_m, k_M], stability is guaranteed even if scanning is infinitely slow, i.e. scan time T → ∞. This is in accordance with [C2].

We now consider the case λ_M^L > 0. Say at time t = t*, E = E* and the system goes to Scan Mode to search for a stabilizing gain. From inequality (18) we have
  Ė ≤ λ_M^L ‖x‖² ≤ (λ_M^L / λ_m(P)) E   (19)
We want to find the maximum possible increase of E in Scan Mode. At this point it is important to understand that [C1] is another way of saying that K_{P,s}(t) ≠ ∅ for all t ≥ 0. Hence, if [C1] is true, Lemma 2 guarantees that the maximum duration of Scan Mode is 2T. Let E = E_S and t = t_s at the end of Scan Mode. The maximum expansion of E is obtained by integrating inequality (19) from t = t* to t = t* + 2T:
  E_S ≤ β_s E*,   where   (20)
β_s = exp(2λ_M^L T/λ_m(P)) is the worst-case expansion factor. The end of Scan Mode implies that a stabilizing gain has been found, i.e. at t = t_s, Ė(t_s) ≤ −αE(t_s) (refer Fig. 6a), and Rest Mode starts. Here it is important to note the relation between α and s (refer [C1]). [C1] assures that at t = t_s a gain K can be found s.t.
  Ė(t_s) ≤ λ_M(Q(t_s)) ‖x‖² ≤ −s‖x‖² ≤ −(s/λ_M(P)) E(t_s)   (21)
Comparing inequality (21) with Fig. 6a we get α = s/λ_M(P). Let τ be the duration of Rest Mode. We want to find the minimum possible decrease of E in Rest Mode before the system goes back to Scan Mode. Again,
  Ė = x^T Q(t) x = x^T [Q(t_s) + Δ(t)] x = Ė(t_s) + x^T Δ(t) x,   where   (22)
Δ(t) = Q(t) − Q(t_s).
As K(t) = K(t_s) for t_s ≤ t ≤ t_s + τ (K̇ = 0 in Rest Mode), Δ(t) can be expanded as
  Δ(t) = [ΔA(t) − ΔB(t)K(t_s)C]^T P + P[ΔA(t) − ΔB(t)K(t_s)C]   (23)
where ΔA(t) = A(t) − A(t_s) and ΔB(t) = B(t) − B(t_s). Taking norms on both sides of equation (23) yields
  ‖Δ(t)‖ ≤ 2‖P[ΔA(t) − ΔB(t)K(t_s)C]‖ ≤ 2‖P‖(‖ΔA(t)‖ + ‖ΔB(t)‖‖K(t_s)‖‖C‖) ≤ 2λ_M(P)δΔt   (24)
where δ = δ_A + δ_B k_M and Δt = t − t_s. Note that ‖C‖ = 1; also ‖P‖ = λ_M(P) for every symmetric P ≻ 0. Returning to equation (22) we have
  Ė = Ė(t_s) + x^T Δ(t) x ≤ −s‖x‖² + ‖Δ(t)‖‖x‖² ≤ −[s − 2λ_M(P)δΔt]‖x‖² ≤ −(α − 2δΔt)E   (25)
To find the minimum possible value of τ we substitute Ė = −γαE in inequality (25). This gives
  τ ≥ α(1 − γ)/(2δ)   (26)
Hence τ_min = α(1 − γ)/(2δ). Let E = E_R at the end of Rest Mode. We integrate inequality (25) from t = t_s to t = t_s + α(1 − γ)/(2δ) to find the minimum possible contraction in Rest Mode:
  E_R ≤ β_r E_S,   where   (27)
β_r = exp(−α²(1 − γ²)/(4δ)) is the worst-case contraction factor. Using inequalities (20) and (27) we get
  E_R ≤ βE*,   where   (28)
β = β_s β_r = exp(−α²(1 − γ²)/(4δ) + 2λ_M^L T/λ_m(P)). If β ∈ (0, 1), there will be an overall decrease of E in a rest-scan cycle.
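As a numerical sanity check on the per-cycle factors, the sketch below encodes the worst-case expressions β_s = exp(2λ_M^L T/λ_m(P)) and β_r = exp(−α²(1 − γ²)/(4δ)) as reconstructed here, together with the scan-time bound that makes β = β_s β_r < 1 (function names and the sample numbers are ours):

```python
import math

def cycle_factors(alpha, gamma, delta, lam_min_P, lam_L_max, T):
    """Worst-case Scan-Mode expansion and Rest-Mode contraction factors."""
    beta_s = math.exp(2.0 * lam_L_max * T / lam_min_P)               # cf. (20)
    beta_r = math.exp(-alpha**2 * (1.0 - gamma**2) / (4.0 * delta))  # cf. (27)
    return beta_s, beta_r

def max_scan_time(alpha, gamma, delta, lam_min_P, lam_L_max):
    """Largest T with beta = beta_s*beta_r < 1; unbounded when lam_L_max <= 0."""
    if lam_L_max <= 0:
        return math.inf
    return lam_min_P * alpha**2 * (1.0 - gamma**2) / (8.0 * delta * lam_L_max)
```

At T equal to this bound the two exponents cancel and β = 1 exactly; any smaller T gives β < 1, i.e. a net contraction per rest-scan cycle.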
β ∈ (0, 1) can be assured if
  T < λ_m(P)α²(1 − γ²)/(8δλ_M^L)   (29)
We now want to prove that inequality (29) ensures E(t) → 0 (and hence x(t) → 0) as t → ∞. Before proceeding we note two things: 1) the theoretically predictable duration of a rest-scan cycle is T_rs = 2T + α(1 − γ)/(2δ); 2) T_rs is a conservative estimate of the duration of a rest-scan cycle, so it may well happen that Rest Mode lasts longer, leading to an unpredictable amount of contraction. This is clearly shown in Fig. 7. Now we want to put an upper bound on E(t). Say we want to bound E(t) at any one of the blue dots shown in Fig. 7, i.e. in the green zones. This can be done as follows:
  E(t) ≤ E_o β^{η(t)} exp(−γα(t − η(t)T_rs))   (30)
where E = E_o at t = 0 and η(t) is the number of complete rest-scan cycles before time t. In inequality (30), β^{η(t)} is the predictable contraction factor contributed by the red zones in Fig. 7, and the last term is the unpredictable contraction factor contributed by the green zones. Note that in the green zones all we can assure is Ė ≤ −γαE, which when integrated yields the last term of inequality (30). Now we want to upper bound E(t) at one of the red dots shown in Fig. 7, which can be done as follows.

As E = x^T P x ≥ λ_m(P)‖x‖², we have ‖x‖ ≤ √(E/λ_m(P)), where λ_m(P) > 0 as P ≻ 0. Hence if E → 0 then ‖x‖ → 0, implying x → 0. Without additional knowledge of the system dynamics, η(t) is not predictable.

Figure 7. Plot showing the actual and predictable durations of a rest-scan cycle. Red zones show the predictable duration, i.e. (t_i^p − t_{i−1}) = T_rs, while the actual duration is t_i − t_{i−1}. Also note that E_{Ri} ≤ βE_{Si}, the theoretically predictable contraction in a rest-scan cycle.
The green zones show the unpredictable Rest Modes.

  E(t) ≤ E_o β_s β^{η(t)} exp(−γα(t − T_rs − η(t)T_rs)₊)   (31)
In inequality (31) the operator (a)₊ = max(0, a). Inequality (31) resembles (30) except that here the system may be in Scan Mode, leading to the extra expansion factor β_s; there is also an extra −T_rs in the last term to deduct the predictable time of the current rest-scan cycle. Between inequalities (30) and (31), (31) is the worst-case upper bound on E(t) due to the two additional terms, i.e. −T_rs inside exp(−γα(·)) and β_s ≥ 1. From inequality (31) it is clear that if β ∈ (0, 1) then E(t) → 0 as t → ∞. This completes the proof of Theorem 1.

Remark 4: From inequality (31), convergence is better if β is small. According to inequality (28), β decreases as α (or s) increases and as T decreases. The effect of γ on β is more involved: if γ increases then β increases (refer inequality (28)). Thus the predictable contraction decreases (due to large β) while the unpredictable contraction increases (due to large γ). Hence γ can be neither too high nor too low.

Remark 5: Note that the RGS strategy does not impose any theoretical limit on the rates of variation, δ_A and δ_B, of the LTV system. This is perhaps one of the novelties of RGS.

Remark 6 (RGS for LTI Systems): For uncertain LTI systems, Theorem 1 retains [C1] without change. However, [C2] no longer imposes an upper bound on T but merely demands a finite non-zero value. This ensures that a stabilizing gain is found within a finite time period of 2T.

B. Bilinear Matrix Inequality (BMI) Optimization Problem

The foundation of RGS is the validity of [C1] for a given uncertain LTV/LTI system.
We pose a BMI optimization problem to check the validity of [C1] and, in the process, find the values of P and s needed to implement RGS.

We start by formally defining the set L. Let A(t) and B(t) be functions of p independent physical parameters Θ = [Θ₁, Θ₂, ..., Θ_p]^T, i.e. (A(t), B(t)) = F(Θ(t)). Θ(t) is time varying and, at every time instant, is also associated with an uncertainty set because of parameter uncertainty. We assume that every physical parameter Θ_i is bounded, i.e. Θ_i(t) ∈ [Θ_i^L, Θ_i^H]. Then Θ(t) ∈ S, where S = [Θ₁^L, Θ₁^H] × [Θ₂^L, Θ₂^H] × ... × [Θ_p^L, Θ_p^H], a p-dimensional hypercube. We assume no knowledge of the time variation of Θ, i.e. of Θ(t), but we assume knowledge of S. Then L is the image of S under the transformation F, i.e. F : S → L where S ⊂ R^p and L ⊂ R^{N×N} × R^{N×1}.

Remark 7: The system described by equation (15) can be represented using the compact notation [a₁ a₂ ··· a_N | b]. Hence one can take L ⊂ R^{N+1} rather than L ⊂ R^{N×N} × R^{N×1}. This reduces notational complexity by making the elements of L vectors rather than ordered pairs of matrices.

At this point we formulate [C1] as an optimization problem. With a slight abuse of the variable s, we state the following problem.

Problem 1: [C1] holds if and only if the optimal value of the problem
  minimize: s
  subject to: (A − BK_AB C)^T P + P(A − BK_AB C) ⪯ sI for all (A, B) ∈ L,  P ≻ 0
with design variables s ∈ R, P ∈ R^{N×N} and K_AB ∈ [k_m, k_M], is negative.

Note the use of K_AB instead of K in Problem 1. It signifies that we do not need a common gain K for all (A, B) ∈ L: we may have separate gains K_AB for every (A, B) ∈ L satisfying Problem 1, and the RGS strategy described in Section IV-A will search for them.
However, the optimization problem described in Problem 1 is semi-infinite and hence not computationally tractable. We will now pose Problem 1 as a finite optimization problem.

We can always bound the compact set L by a convex polytope. Define a polytope P = Conv{V}, where V = {(A_i, B_i) : i = 1, 2, ..., m} is the set of m vertices of the convex polytope, such that if (A, B) ∈ L then (A, B) ∈ P. Then L ⊆ P. We now give the following example to illustrate the concepts of S, L and P (discussed above) and also to discuss how to calculate δ_A and δ_B (discussed in Assumption 3).

Example 2: Consider the scalar LTV system
  ẋ = (a(t)²c(t)/b′) x + (√(b′c(t))/a(t)^{3/2}) u;  y = x   (32)
Here a, b′ and c are physical parameters which may represent quantities like mass, friction coefficient, resistance, capacitance, etc. a and c are time varying while b′ is an uncertain parameter. a and c vary as a(t) = a* − (a*/2)exp(−t/τ_a) and c(t) = c* + (c*/2)exp(−t/τ_c), and b′ lies in the uncertainty set [b*/2, 3b*/2]. Here the physical parameter vector is Θ = [a, b′, c] and hence p = 3. From the time variation of a and c we can conclude that a ∈ [a*/2, a*] and c ∈ [c*, 3c*/2].

(Semi-infinite optimization problems are those with infinitely many constraints. For a given L, the polytope P is not unique.)

Figure 8. a) Uncertainty set S associated with the physical parameters; the coordinates of two diagonally opposite vertices are shown in the figure. b) Uncertainty set L associated with the LTV system characterized by equation (15). c) Bounding convex polytope P of the set L. The red lines are the edges of the polytope P, whose vertices form the elements of the set V.
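The passage from S to L and on to a bounding polytope can be explored numerically. The sketch below (helper names and the specific interval endpoints are ours; the map F follows our reading of Example 2) grids a parameter hypercube, maps every grid point, and returns an axis-aligned bounding box, which is one simple, if conservative, choice of P with L ⊆ P:

```python
import itertools
import numpy as np

def bound_L_by_box(F, lo, hi, n=11):
    """Grid the hypercube S = [lo, hi], map each grid point through F to get a
    sample of L, and return the corners of an axis-aligned bounding box of the
    sample -- the vertex set V of one (conservative) bounding polytope P."""
    axes = [np.linspace(l, h, n) for l, h in zip(lo, hi)]
    pts = np.array([F(*theta) for theta in itertools.product(*axes)])
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # every corner of the box [mins, maxs] is a vertex of P
    return np.array(list(itertools.product(*zip(mins, maxs))))

# stand-in for the map F of Example 2: (a, b', c) -> (a^2 c / b', sqrt(b' c)/a^(3/2))
F = lambda a, b, c: (a * a * c / b, (b * c) ** 0.5 / a ** 1.5)
V = bound_L_by_box(F, lo=[0.5, 0.5, 1.0], hi=[1.0, 1.5, 1.5])
```

Here L is two-dimensional, so the box has 4 corner vertices; a convex-hull routine would give a tighter P at the cost of more bookkeeping.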
Therefore S = [a*/2, a*] × [b*/2, 3b*/2] × [c*, 3c*/2], a hypercube as shown in Fig. 8a.

To find the set L, first note that N = 1 in this example since the system described by equation (32) is scalar. Therefore L ⊂ R² and every element [a₁, b]^T ∈ L is the image [a₁, b]^T = F(a, b′, c), where
  F(a, b′, c) = [ a²c/b′ ;  √(b′c)/a^{3/2} ]   (33)
One method to obtain the set L is to divide the hypercube S into uniform grids and then map each grid point using equation (33). The set L for this example is shown in Fig. 8b. Though this method of obtaining L from S is computationally expensive, one must appreciate that for a general system there is no more elegant method. Now we want to find a convex polytope P which bounds L; one such polytope is shown in Fig. 8c. This polytope has m = 5 vertices. In practice the polytope P can be found using convex-hull functions widely available in many programming languages; one popular reference is [29].

We now discuss how to calculate δ_A and δ_B for this example. In most practical scenarios the controller designer may not explicitly know the equations governing the rate of change of the physical parameters, i.e. in this example the designer may not know a(t) and c(t) explicitly. However, it is practical to assume knowledge of bounds on the rates of change of the physical parameters, i.e. the controller designer may know that 0 < ȧ ≤ a*/(2τ_a) and −c*/(2τ_c) ≤ ċ < 0. There is no standard way to calculate δ_A and δ_B from these bounds, and depending on the method used one may obtain different estimates of δ_A and δ_B. For this example, δ_A and δ_B are calculated as follows.
δ_A = max |d/dt(a(t)²c(t)/b′)| = max( |2acȧ + a²ċ| / b′ )
   ≤ [2 sup(a) sup(c) max(ȧ) + sup(a)² max(|ċ|)] / inf(b′)
   = [2 a* (3c*/2)(a*/(2τ_a)) + (a*)²(c*/(2τ_c))] / (b*/2)
   = (a*²c*/b*)(3/τ_a + 1/τ_c)

δ_B = max |d/dt(√(b′c(t))/a(t)^{3/2})| = max( √b′ | −(3/2)√c ȧ/a^{5/2} + ċ/(2√c a^{3/2}) | )
   ≤ √(sup(b′)) [ (3/2)√(sup(c)) max(ȧ)/inf(a)^{5/2} + max(|ċ|)/(2√(inf(c)) inf(a)^{3/2}) ]

which, after substituting the interval endpoints and rate bounds above, gives an explicit bound in terms of a*, b*, c*, τ_a and τ_c.

Lemma 3: Under Assumption 4, if for a given P ∈ R^{N×N} and s ∈ R there exists a K_i ∈ [k_m, k_M] s.t.
  (A_i − B_iK_iC)^T P + P(A_i − B_iK_iC) ⪯ sI,  (A_i, B_i) ∈ V,
then there exists a K_AB ∈ [k_m, k_M] s.t.
  (A − BK_AB C)^T P + P(A − BK_AB C) ⪯ sI for all (A, B) ∈ L.

Proof: We first define the N × N matrix Γ, all elements of which are 0 except its (N, 1) element, which is 1. Also note that every (A, B) ∈ L can be written as a convex combination of the elements V of the convex polytope P: mathematically, for all (A, B) ∈ L there exist scalars θ_i ≥ 0, i = 1, 2, ..., m, s.t.
  (A, B) = Σ_{i=1}^m θ_i (A_i, B_i)  and  Σ_{i=1}^m θ_i = 1,
where (A_i, B_i) ∈ V for all i = 1, 2, ..., m. Now,
  (A − BK_AB C)^T P + P(A − BK_AB C) = (A^T P + PA) − bK_AB(Γ^T P + PΓ)
    = Σ_{i=1}^m θ_i (A_i^T P + PA_i) − (Γ^T P + PΓ) K_AB Σ_{i=1}^m θ_i b_i   (34)
Equation (34) holds because BC = b(t)Γ, owing to the special structure of the B and C matrices.
Using the inequality
  (A_i − B_iK_iC)^T P + P(A_i − B_iK_iC) ⪯ sI,  (A_i, B_i) ∈ V,
in equation (34) we have
  (A − BK_AB C)^T P + P(A − BK_AB C) ⪯ [ Σ_{i=1}^m θ_iK_ib_i − K_AB Σ_{i=1}^m θ_ib_i ] (Γ^T P + PΓ) + sI   (35)
Choosing
  K_AB = ( Σ_{i=1}^m θ_iK_ib_i ) / ( Σ_{i=1}^m θ_ib_i )
in inequality (35) yields
  (A − BK_AB C)^T P + P(A − BK_AB C) ⪯ sI.
Now all that remains is to prove that the chosen K_AB lies in the interval [k_m, k_M]. We know that k_m ≤ K_i ≤ k_M; therefore, under Assumption 4,
  ( Σ_{i=1}^m θ_i k_m b_i ) / ( Σ_{i=1}^m θ_i b_i ) ≤ K_AB ≤ ( Σ_{i=1}^m θ_i k_M b_i ) / ( Σ_{i=1}^m θ_i b_i )  ⇒  k_m ≤ K_AB ≤ k_M.
This concludes the proof of Lemma 3. (Without Assumption 4, the denominator Σ_{i=1}^m θ_i b_i may become 0.)

The importance of Lemma 3 is that it reduces the semi-infinite optimization problem posed in Problem 1 to a finite optimization problem. This leads to the most important result of this section.

Theorem 2: [C1] holds if the optimal value of the problem
  minimize: s
  subject to: (A_i − B_iK_iC)^T P + P(A_i − B_iK_iC) ⪯ sI for all (A_i, B_i) ∈ V,
       k_m ≤ K_i ≤ k_M, i = 1, 2, ..., m,  P ≻ 0
with design variables s ∈ R, P ∈ R^{N×N} and K_i ∈ R, i = 1, 2, ..., m, is negative.

Proof: The proof follows from Problem 1 and Lemma 3. Note that while Problem 1 is a necessary and sufficient condition, Theorem 2 is only sufficient. This is because L ⊆ P, which introduces some conservativeness into the optimization problem of Theorem 2.

Theorem 2 poses the classical BMI eigenvalue minimization problem (BMI-EMP) in the variables P and K_i. BMIs are non-convex in nature, leading to multiple local minima. Several algorithms for finding a local minimum exist in the literature (see [30], [31]); algorithms for global minimization of BMIs are rather rare and have received attention in works like [32], [33].
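For a candidate P and vertex gains K_i, the finite condition of Theorem 2 is straightforward to check numerically via eigenvalues. The sketch below is our helper; the second-order companion-form system and the numbers are purely illustrative:

```python
import numpy as np

def vertex_condition_holds(verts, P, C, K, s):
    """Check the finite condition of Theorem 2: for every polytope vertex
    (A_i, B_i) with its candidate gain K_i, verify the Lyapunov inequality
    (A_i - B_i K_i C)^T P + P (A_i - B_i K_i C) <= s*I in the PSD order."""
    n = P.shape[0]
    for (A, B), k in zip(verts, K):
        Ac = A - B @ (k * C)                  # closed-loop vertex matrix
        Q = Ac.T @ P + P @ Ac
        # Q - s*I must be negative semidefinite, i.e. no eigenvalue above 0
        if np.linalg.eigvalsh(Q - s * np.eye(n)).max() > 1e-9:
            return False
    return True

# illustrative vertex set: a single second-order companion-form system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
verts = [(A, B)]
```

In a BMI solver this check is what the constraint evaluation reduces to once P and the K_i are frozen; here it only certifies a given candidate, it does not search for one.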
Our approach is similar to [32], which is essentially a branch-and-bound algorithm. Such an algorithm works by bounding s below and above, Φ_L ≤ s ≤ Φ_U, and then progressively refining the search to reduce Φ_U − Φ_L. Our main algorithm is Algorithm 4.1 of [32] (page 4). The alternating SDP method of [34] (page 2) is used for calculating Φ_U; for calculating Φ_L we use the convex relaxation technique first introduced in [33] (also discussed in [34], equation (9)). In Appendix B we present a working knowledge of our algorithm; for a detailed explanation the reader may refer to the corresponding works [32], [34].

Theorem 2 poses an optimization problem with (A_i, B_i) ∈ V, k_m and k_M as inputs and P, s and K_i, i = 1, 2, ..., m, as outputs. But k_m and k_M are not known a priori. An initial estimate k_m = k_m^{RH}, k_M = k_M^{RH} is obtained by applying the Routh-Hurwitz criterion to each (A_i, B_i) ∈ V. Let K_i = K_i^{RH}, i = 1, 2, ..., m, be the output of the optimization problem with this initial estimate, and let k*_m = min_{1≤i≤m}(K_i^{RH}) and k*_M = max_{1≤i≤m}(K_i^{RH}). Then the following holds:
1) k*_m ≥ k_m^{RH} and k*_M ≤ k_M^{RH}. This is because k_m^{RH} ≤ K_i^{RH} ≤ k_M^{RH} for all i = 1, 2, ..., m, and hence
  k_m^{RH} ≤ min_{1≤i≤m}(K_i^{RH}) ≤ max_{1≤i≤m}(K_i^{RH}) ≤ k_M^{RH}.

(The Routh-Hurwitz criterion is used to find the bounds on the feedback gains for which a SISO LTI system is closed-loop stable; refer [35] for details.)

2) The outputs P, s and K_i obtained by running the optimization algorithm with a) k_m = k_m^{RH} and k_M = k_M^{RH}, or b) k_m = k*_m and k_M = k*_M, are the same. This is because the gains K_i^{RH} obtained with k_m = k_m^{RH} and k_M = k_M^{RH} also satisfy the bounds k*_m ≤ K_i^{RH} ≤ k*_M.

Therefore, if we choose k_m = k*_m and k_M = k*_M we get a smaller RGS gain set, i.e.
(k*_M − k*_m) ≤ (k_M^{RH} − k_m^{RH}), without compromising the convergence coefficient s. A smaller RGS gain set eases the controller design in an analog setting.

We now give a bound on λ_M^L (defined in Theorem 1). It is not possible to calculate λ_M^L with the formula given in Theorem 1, as that would involve a search over the dense set S_Q^L. Define the set
  S_Q^P = { Q : Q = (A − BKC)^T P + P(A − BKC), (A, B) ∈ P, K ∈ [k_m, k_M] }
and let λ_M^P = max_{Q ∈ S_Q^P} λ_M(Q). As L ⊆ P it is obvious that S_Q^L ⊆ S_Q^P (S_Q^L defined in Theorem 1); therefore λ_M^L ≤ λ_M^P, and λ_M^P gives an estimate of λ_M^L by upper bounding it. For a scalar gain K and the specific structure of the B and C matrices (refer equation (15)), it can be shown that S_Q^P is a compact convex set. Also, λ_M(Q) is a convex function of Q (refer page 82 of [36]). It is well known that the global maximum of a convex function over a compact convex set occurs at an extreme point of the set (refer [37]). Thus the problem of maximizing λ_M(·) over Q ∈ S_Q^P reduces to maximizing λ_M(·) over Q ∈ S_Q^V, where
  S_Q^V = { Q : Q = (A − BKC)^T P + P(A − BKC), (A, B) ∈ V, K ∈ {k_m, k_M} },
the set of vertices of S_Q^P. This leads to the formula
  λ_M^L ≤ λ_M^P = max_{Q ∈ S_Q^V} λ_M(Q)   (36)
Inequality (36) can be used to obtain an estimate of λ_M^L.

Remark 8: As mentioned at the beginning of Section IV, for a controller to be implementable in an analog framework it has to be simple. Though the synthesis of the RGS parameters (k_m, k_M, T, α, γ and P) is involved, one must appreciate that designing a controller is a one-time affair. RGS itself is a simple gain-scheduled controller governed by equation (16).

V. EXAMPLE

The Parallel Plate Electrostatic Actuator (PPA) shown in Fig. 9a is a vital component of several miniaturized systems.
We perform regulatory control of the PPA to show the effectiveness of the proposed analog architecture and the RGS strategy. The PPA, as described in [20] (page 183), follows the dynamics
  m ÿ + b ẏ + κy = εA V_s² / (2(G − y)²)   (37)
which is nonlinear in nature. The plant parameters include the spring constant κ, damping coefficient b, moving-plate mass m and area A, permittivity ε and maximum plate gap G.

(This simply says that the maximum of a function (λ_M(·) here) over a bigger set (S_Q^P here) is at least its maximum over a smaller set (S_Q^L here).)

As we are interested only in regulatory control, we use the linearized model
  [ẋ₁; ẋ₂] = [ 0, 1; −κ(G − 3G_o)/(m(G − G_o)), −b/m ][x₁; x₂] + [ 0; √(2εAκG_o)/(m(G − G_o)) ] u_r   (38)
where x₁ is the displacement from the operating point G_o. The plant output is the moving-plate position y = G_o + x₁, and the plant input is V_s = V_b + V_u. Comparing with the RGS theory, V_b = u_b is the bias voltage maintaining G_o as the operating point, and V_u = u_r = KV_e = K(G_o − y) = −Kx₁ is the regulation voltage supplied by the memristive AGC. For G_o > G/3 the system has an unstable pole; we perform regulation around G_o = G/2.

The true plant parameters are m = 3 × 10⁻ kg, b = 1.1 × 10⁻ N s m⁻¹, G = 10⁻ m and A = 1.6 × 10⁻ m². G and A are uncertain but lie in the sets G ∈ S_G = [ . , . ] × 10⁻ m and A ∈ S_A = [ . , . ] × 10⁻ m². ε varies with the surrounding conditions as ε(t) = 5ε_o + 1. ε_o sin(7. t), where ε_o is the permittivity of vacuum. Spring loosening causes κ to decrease as κ(t) = 0.08 + 0.02 e^{−0.1t} N m⁻¹. We now discuss the steps involved in implementing RGS in an analog framework.

Step 1 (Identify S): S is the uncertainty set of physical parameters, first defined on page 8.
Define the sets S_ε = [ . ε_o, . ε_o] and S_κ = [0.08, 0.1] N m⁻¹. Then the ordered tuple (G, A, ε, κ) ∈ S = S_G × S_A × S_ε × S_κ. Note that here p = 4.

Step 2 (Find P): We numerically map S to L (as in Example 2) and then use the convhulln function of MATLAB to find a convex polytope P s.t. L ⊆ P. In this case P consists of m = 4 vertices. We do not state the computed P explicitly, for the sake of neatness.

Step 3 (Compute P, s, α, k_m, k_M): Solving the optimization problem proposed in Theorem 2 we get s = 0. , k_m = 8600 and k_M = 86000, together with the 2 × 2 matrix P. Here λ_M(P) = 1, hence α = s/λ_M(P) = s.

Step 4 (Compute λ_m(P), λ_M^L, δ): For the calculated P, λ_m(P) = 0. . From equation (36), λ_M^L = 29. . We now calculate δ_A and δ_B for this example. As mentioned in Example 2, the controller designer does not know κ(t) and ε(t) explicitly, but does know the bounds on κ̇ and ε̇, which for this example are − . × 10⁻ ≤ κ̇ < 0 and −(1. × 7. × ε_o) ≤ ε̇ ≤ (1. × 7. × ε_o). First note that the (2,2) element of the system matrix of the linearized PPA model (described by equation (38)) is not time varying. Hence δ_A can be written simply as
  δ_A = max |d/dt( −κ(t)(G − 3G_o)/(m(G − G_o)) )| = max |d/dt( −κ(t)(G − 3G/2)/(m(G − G/2)) )| = max(|κ̇|)/m

Figure 9. a) Schematic of the PPA. b) Analog implementation of E = x^T P x for P ∈ R^{2×2}; note that P₁₂ = P₂₁ as P is symmetric.
c) Analog implementation of RGS. d), e), f) Plots of the plate-position error x₁ (from the operating point G_o), the normalized RGS gain K/k_m, and the Lyapunov function E = x^T P x, with respect to time.

  δ_B = max |d/dt( √(2ε(t)Aκ(t)G_o)/(m(G − G_o)) )| = max |d/dt( √(2ε(t)Aκ(t)(G/2))/(m(G − G/2)) )|
    = (2/m)√(sup(S_A)/inf(S_G)) max( |d/dt √(εκ)| )
    = (2/m)√(sup(S_A)/inf(S_G)) max( |κε̇ + εκ̇| / (2√ε√κ) )
    ≤ (2/m)√(sup(S_A)/inf(S_G)) · (sup(S_κ)max(|ε̇|) + sup(S_ε)max(|κ̇|)) / (2√(inf(S_ε))√(inf(S_κ)))

Substituting the values into the above expressions for δ_A and δ_B we get δ = δ_A + δ_B k_M = 1351.

Step 5 (Compute T and γ): We arbitrarily choose γ = 0. . Substituting this γ into inequality (29) we get T < . × 10⁻ s; hence we choose T = 10⁻ s.

Step 6 (Analog Design): Fig. 9b and 9c together show the analog implementation of RGS. The error voltage V_e is the plate-position error, i.e. V_e = −x₁ = (G_o − y). The gain-control voltage V_C is controlled by the Hysteresis Block and the Voltage Toggler block. In Rest Mode V_C = 0, ensuring that the gain M is constant (refer equation (12)). In Scan Mode V_C ∈ {−V_DD, V_DD}: V_C = V_DD to scan from R_Soff to R_Son, and V_C = −V_DD to scan from R_Son to R_Soff (refer equation (12)). R_C controls Ṁ, the rate of change of the gain (refer equation (12)). Setting R_C = V_DD T/Q_SM ensures a scan time of T. The derivation of R_C is simple: the resistance of the memristor changes from R_Soff to R_Son when a charge Q_SM passes through it, and we want this change to happen in time T, hence the desired current is Q_SM/T.
As V_C = V_DD in Scan Mode, the required resistance is R_C = V_DD/(Q_SM/T) = V_DD T/Q_SM.

The Hysteresis Block is a conventional inverting Schmitt trigger. Tuning R₁ = ((V_DD − α)/(α(1 − γ)))R and R₂ = ((V_DD − α)/(α(1 + γ)))R ensures that the Schmitt trigger's output goes from V_DD to −V_DD at Ė/E = −γα and from −V_DD to V_DD at Ė/E = −α. Due to the transistor arrangement in the Hysteresis Block: 1) V_C = 0 when V_sh = V_DD, therefore V_sh = V_DD implies Rest Mode; 2) V_C = V_T when V_sh = −V_DD, and it will be explained next that V_T ∈ {−V_DD, V_DD}, therefore V_sh = −V_DD implies Scan Mode. So the Hysteresis Block takes the system from Rest Mode to Scan Mode when Ė/E = −γα and from Scan Mode to Rest Mode when Ė/E = −α, in accordance with the hysteresis shown in Fig. 6a.

V_T, the output of the Voltage Toggler block, toggles between −V_DD and V_DD. Recalling equation (12), this makes the gain of the memristive gain-control block sweep back and forth between R_Son/R_I and R_Soff/R_I. We now explain the working of this block. Say V_T = V_DD and the zone-indicating voltages (refer Fig. 3) are V_L1 = V_L2 = V_DD. As V_L1 = V_L2 = V_DD, the transistors T_C1 and T_C2 are OFF. The voltage V₊ = V_T = V_DD; since V₊ = V_DD > 0, V_T = V_DD, which shows that the output of the Voltage Toggler block is stable. As V_T = V_DD, the memristor's resistance M decreases until M = R_Son (see equation (12)), at which point V_L1 = −V_DD and V_L2 = V_DD (refer Case 2 of III-D). T_C1 is momentarily ON, making V₊ = −V_DD and hence driving V_T to −V_DD. Then M increases from R_Son, making V_L1 = V_L2 = V_DD and driving T_C1 back to the OFF state. When this occurs V₊ = V_T = −V_DD; as V₊ = −V_DD < 0, V_T = −V_DD. A similar momentary transition of T_C2 to the ON state toggles V_T from −V_DD to V_DD when M = R_Soff.

Several plots of the regulatory performance of the PPA under the RGS control strategy are shown in Fig. 9d, e, f.
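The R_C sizing used above is a one-line calculation; the helper below merely restates R_C = V_DD T/Q_SM (the function name and the sample component values are ours, for illustration):

```python
def scan_resistor(V_DD, T, Q_SM):
    """A charge Q_SM must flow through the memristor to switch it from R_Soff
    to R_Son; doing so in the scan time T requires a constant current Q_SM/T,
    hence R_C = V_DD / (Q_SM / T) = V_DD * T / Q_SM."""
    return V_DD * T / Q_SM
```

For instance, with V_DD = 5 V, T = 1 ms and Q_SM = 1 uC (illustrative values), R_C comes out to 5 kOhm.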
It is interesting to observe that an LTV system can have multiple rest-scan cycles (see Fig. 9e). This is because, for a time-varying system, a stabilizing gain at a given instant may not remain stabilizing at a later instant due to the change in system dynamics. Unlike an LTV system, an LTI system will have only one rest-scan cycle.

Remark 9: In this example T is very low, which may seem to challenge the analog design; for all practical purposes it does not. For the sake of simulation we chose fictitious time variations of κ(t) and ε(t) which are quite fast compared to those found in nature. Therefore δ_A and δ_B are high (refer Step 4), resulting in a high δ and hence a low scan time T (refer inequality (29)). In practice, the time variation of a system caused by ageing effects and atmospheric variation is a slow process, and T will be much higher.

Remark 10: To control an array of miniaturized devices (say PPAs), one can reduce the circuitry required by identifying the components that can be common to the entire array. For example, the Synchronization Block can be common to the entire array: synchronization of each pair of coupled Memristor Gain Block and Integrator (refer Fig. 3) can be done in a time-multiplexed manner, i.e. each pair is synchronized using one Synchronization Block. The oscillator shown in the circuit of Fig. 3 can also be common to the entire array. (The designer is free to choose the resistance R.)

VI. DISCUSSION

To the best of the authors' knowledge this paper is one of the first attempts towards understanding the use of memristors in control applications. A memristive variable-gain architecture has been proposed. We then propose (and prove) the RGS control strategy, which can be implemented using this framework.
The simplicity of the RGS control strategy is demonstrated using an example. The extension of this work can take two courses. From a circuit standpoint, one may try to design an analog circuit which mimics the circuit shown in Fig. 2 but with a smaller number of op-amps. Since the synthesis of memristors is still an engineering challenge, one may speculate regarding the use of variable CMOS resistors (refer [21]) to implement the analog gain controller proposed in Section III. Two milestones have to be addressed before RGS is practically feasible: 1) RGS needs information about the states x, which is obtained by differentiating the output y N − 1 times; but differentiation might amplify the noise. 2) RGS relies on the knowledge of E, which is obtained by computing x^T P x using analog circuitry. Such analog implementation will be easier if P is sparse. Hence, from a control theoretic standpoint, addressing these two issues will be the first target of the authors. The latter point has been addressed in [38]. Extending RGS to SISO nonlinear and, in general, MIMO systems would be the next step. It would also be interesting to explore other simple control strategies (like [39]) which can be implemented in an analog framework.

APPENDIX A
PROOF OF LEMMA 1: DRIFTING NATURE OF STABILIZING GAIN SET K_{P,s}(t)

To prove Lemma 1 we take the following steps:
1) Pick a gain K ∈ K_{P,s}(t).
2) Prove that if δt → 0 then there exists a ΔK → 0 s.t. (K + ΔK) ∈ K_{P,s}(t + δt).
3) As ΔK → 0, (K + ΔK) ∈ K_{P,s}(t). Hence the gain K + ΔK belongs to both the sets K_{P,s}(t) and K_{P,s}(t + δt). This implies that K_{P,s}(t) ∩ K_{P,s}(t + δt) ≠ ∅.
Now we proceed with the proof. We first pick a gain K ∈ K_{P,s}(t). As K_{P,s}(t) ≠ ∅ ∀ t ≥ 0, such a gain will exist. For a time t = t* there exists a gain K* ∈ K_{P,s}(t*) and a scalar s* > s s.t.
the following equality holds:

(A* − B*K*C)^T P + P(A* − B*K*C) = −s*I    (39)

In equation (39), A* = A(t*) and B* = B(t*). Equation (39) directly follows from the very definition of the stabilizing gain set (defined on page 7). Let us say that at time t = t* + δt there exists a K* + ΔK s.t.

[(A* + δA) − (B* + δB)(K* + ΔK)C]^T P + P[(A* + δA) − (B* + δB)(K* + ΔK)C] = −s*I    (40)

In equation (40), δA = Ȧ(t*)δt and δB = Ḃ(t*)δt are infinitesimal changes in A and B respectively. Note that ΔK is a scalar. Hence, substituting equation (39) in (40) yields

ΔK R = (δA − δB K*C)^T P + P(δA − δB K*C)    (41)

where R = C^T(B* + δB)^T P + P(B* + δB)C. Hence if ΔK satisfies equation (41) then K* + ΔK ∈ K_{P,s}(t* + δt). Now all we need to do is to prove that ΔK is infinitesimal, i.e. ΔK → 0 if δA → 0 and δB → 0. Taking norms on both sides of equation (41) we get

ΔK ‖R‖ ≤ 2‖P‖(‖δA‖ + K*‖δB‖‖C‖)  ⇒  ΔK ≤ (2‖P‖(δ_A + K* δ_B ‖C‖)/‖R‖) δt    (42)

Since ‖R‖ is finite, it is obvious from inequality (42) that ΔK → 0 as δt → 0. This concludes the proof.

APPENDIX B
GLOBAL SOLUTION OF BMI-EMP

Theorem 2 involves solving the following BMI-EMP optimization problem:

OP1: minimize: s
subject to: (A_i − B_i K_i C)^T P + P(A_i − B_i K_i C) ⪯ sI  ∀ (A_i, B_i) ∈ V
k_m ≤ K_i ≤ k_M, i = 1, 2, ..., m
P ≻ 0

We will first justify why OP1 is called BMI-EMP.
Consider a matrix inequality of the type

(A − BKC)^T P + P(A − BKC) ⪯ sI    (43)

Given a set of A, B, C, K and P, the least possible value of s which will satisfy inequality (43) is indeed the largest eigenvalue of the matrix (A − BKC)^T P + P(A − BKC). So if we exclude the inequalities k_m ≤ K_i ≤ k_M and P ≻ 0, then OP1 can be equivalently cast as

min_{P, K_i} max_{(A_i, B_i) ∈ V} λ_M((A_i − B_i K_i C)^T P + P(A_i − B_i K_i C))

which is basically a Largest Eigenvalue Minimization Problem, or just "EMP". Now consider the function

Λ(P, K) = λ_M((A − BKC)^T P + P(A − BKC))

The matrix Q(P, K) := (A − BKC)^T P + P(A − BKC) is bilinear in the sense that it is linear in K if P is fixed and linear in P if K is fixed. As Q ∈ S, the function Λ(P, K) = λ_M(Q(P, K)) is convex in K if P is fixed and convex in P if K is fixed.

We are interested in the global solution of BMI-EMP, as a smaller value of s will ensure better convergence of the LTV/LTI system. In the following we provide a sketch of the work done in [32], [34], [33], which will be just enough to design an algorithm for the global solution of BMI-EMP. However, we don't provide a detailed explanation of the algorithm, for which the reader may refer to [32].

Before proceeding forward we would like to cast OP1 in a form which can be handled by a numerical solver. Observe that the third constraint of OP1 is a strict inequality which demands that the Lyapunov matrix P has to be positive definite (not positive semi-definite). Such a strict inequality will impose numerical challenges and hence we replace it with the non-strict inequality P ⪰ μ_p I where 0 < μ_p ≪ 1. Without any loss of generality: 1) We constrain ‖P‖ ≤ 1 by imposing the constraint P ⪯ I. 2) We normalize the RGS gain set.
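The bi-convexity of Λ noted above is easy to spot-check numerically: for fixed P the matrix Q(P, K) is affine in K, so λ_M must satisfy the midpoint inequality in K. The system matrices below are random placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
P = np.eye(N)                     # fixed Lyapunov candidate

def Lam(K):
    """Lambda(P, K) = largest eigenvalue of (A - BKC)^T P + P (A - BKC)."""
    Acl = A - K * (B @ C)
    Q = Acl.T @ P + P @ Acl
    return np.linalg.eigvalsh(Q)[-1]   # eigvalsh returns ascending eigenvalues

k1, k2 = -1.0, 3.0
mid = Lam(0.5 * (k1 + k2))
avg = 0.5 * (Lam(k1) + Lam(k2))
# convexity of lambda_max composed with an affine map: mid <= avg
```

The same check with K fixed and P varying along a line segment of symmetric matrices verifies convexity in the other coordinate.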
We get the following optimization problem:

OP2: minimize: s
subject to: A_i^T P + P A_i − (B_i C)^T K_i P − K_i P (B_i C) ⪯ sI  ∀ (A_i, B_i) ∈ V
μ_k ≤ K_i ≤ 1, i = 1, 2, ..., m
μ_p I ⪯ P ⪯ I

In the above problem, μ_k = k_m/k_M < 1. We also redefine C as C := [k_M 0 ⋯ 0] to neutralize the effect of normalizing the RGS gain set.

We now define two vectors: P = [p_1 p_2 ⋯ p_{n_p}]^T containing the n_p = N(N+1)/2 distinct elements of the symmetric matrix P, and K = [K_1 K_2 ⋯ K_m]^T, the m normalized RGS gains of OP2. We also define two sets X_P and X_K as follows:

X_P := [−1, 1]^{n_p};  X_K := [μ_k, 1]^m

Note that X_P is an n_p-dimensional unit hypercube such that P ∈ X_P, and X_K is an m-dimensional hyper-rectangle such that K ∈ X_K. We also define smaller hyper-rectangles Q_P ⊆ X_P, Q_K ⊆ X_K and Q ⊆ X_P × X_K as follows:

Q_P := [L_P^1, U_P^1] × [L_P^2, U_P^2] × ... × [L_P^{n_p}, U_P^{n_p}]
Q_K := [L_K^1, U_K^1] × [L_K^2, U_K^2] × ... × [L_K^m, U_K^m]
Q := Q_P × Q_K

Obviously −1 ≤ L_P^i ≤ U_P^i ≤ 1 and μ_k ≤ L_K^i ≤ U_K^i ≤ 1. Consider the following "constrained version" of OP2, where (P, K) is only defined in the small hyper-rectangle Q:

OP3: minimize: s
subject to: A_i^T P + P A_i − (B_i C)^T K_i P − K_i P (B_i C) ⪯ sI  ∀ (A_i, B_i) ∈ V
μ_k ≤ K_i ≤ 1, i = 1, 2, ..., m
μ_p I ⪯ P ⪯ I
(P, K) ∈ Q

The input to OP3 is the set Q and its output is s*(Q), which is a function of Q. We want to bound s*(Q) by an upper and a lower bound as follows:

Φ_L(Q) ≤ s*(Q) ≤ Φ_U(Q)    (44)

The convex relaxation technique introduced in [33] (also discussed in [34], equation (9)) has been used to get the lower bound Φ_L(Q). We replace the nonlinearity K_i P with a new matrix W_i.
As W_i is a symmetric matrix of order N, we can represent it by a vector W_i = [w_i^1 w_i^2 ⋯ w_i^{n_p}]^T containing the n_p distinct elements of W_i. (All the elements of the Lyapunov matrix P will be in the range [−1, 1], as P ⪯ I according to OP2.)

Algorithm 1 Alternating SDP Method
1. Set K^(0) = centroid of Q_K. (s^(0), P^(0)) = OP3(Q, K^(0)).
2. if (s^(0) = ∞)
3.   return ∞.
4. else
5.   Set δ > 0 and k = 0.
6.   do {
7.     (s^(k+1), P^(k+1)) = OP3(Q, K^(k)).
8.     (s^(k+1), K^(k+1)) = OP3(Q, P^(k+1)).
9.     k = k + 1.
10.  } while (s^(k−1) − s^(k) ≥ δ|s^(k)|)
11.  return s^(k).
12. end

We now define a matrix W = [W_1^T W_2^T ⋯ W_m^T]^T. If OP3 is expanded in terms of p_j and K_i, then the element w_i^j of W is constrained by the equality w_i^j = K_i p_j. Rather than imposing this equality constraint, we let w_i^j be a free variable which can take any value in the set W(Q) defined as

W(Q) := { W | (P, K) ∈ Q,
w_i^j ≥ L_K^i p_j + L_P^j K_i − L_K^i L_P^j,
w_i^j ≥ U_K^i p_j + U_P^j K_i − U_K^i U_P^j,
w_i^j ≤ U_K^i p_j + L_P^j K_i − U_K^i L_P^j,
w_i^j ≤ L_K^i p_j + U_P^j K_i − L_K^i U_P^j }

By performing the convex relaxation stated above we get the following optimization problem:

OP4: minimize: s
subject to: A_i^T P + P A_i − (B_i C)^T W_i − W_i (B_i C) ⪯ sI  ∀ (A_i, B_i) ∈ V
μ_p I ⪯ P ⪯ I
μ_p K_i I ⪯ W_i ⪯ K_i I, i = 1, 2, ..., m
(P, K) ∈ Q
W ∈ W(Q)

The input to OP4 is the set Q and its output is Φ_L(Q). As OP4 is a relaxed version of OP3, Φ_L(Q) ≤ s*(Q).
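The four linear cuts defining W(Q) are the McCormick envelope of the bilinear term K_i p_j over its box. A quick containment check, with illustrative (assumed) box bounds, confirms that every feasible product lies inside the relaxed set:

```python
import numpy as np

rng = np.random.default_rng(2)
LK, UK = 0.2, 1.0    # [L_K, U_K]: assumed gain box (mu_k = 0.2)
LP, UP = -1.0, 1.0   # [L_P, U_P]: box for one Lyapunov entry p_j

def in_envelope(K, p, w, tol=1e-12):
    """The four linear cuts of W(Q) for a single (i, j) pair."""
    return (w >= LK * p + LP * K - LK * LP - tol and
            w >= UK * p + UP * K - UK * UP - tol and
            w <= UK * p + LP * K - UK * LP + tol and
            w <= LK * p + UP * K - LK * UP + tol)

# sample (K, p) in the box and verify w = K*p always satisfies the cuts
samples = rng.uniform([LK, LP], [UK, UP], size=(1000, 2))
ok = all(in_envelope(K, p, K * p) for K, p in samples)
```

Because the envelope strictly contains the bilinear surface, OP4's feasible set contains OP3's, which is exactly why its optimum is a valid lower bound Φ_L(Q).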
Note that OP4 is a convex optimization problem, more specifically a Semi-Definite Program (SDP), which can be solved by numerical solvers like CVX [40].

We now concentrate on defining the upper bound Φ_U(Q). Any local minimum of OP3 can indeed serve as the upper bound Φ_U(Q). We use the Alternating SDP Method discussed in [33], [41]. The Alternating SDP Method relies on the bi-convex nature of OP3, i.e. OP3 becomes a convex problem (more specifically an SDP) in P with K fixed, or a convex problem in K with P fixed. The Alternating SDP Method is summarized in Algorithm 1. We represent by OP3(Q, K′) the optimization problem obtained by fixing K = K′ in OP3, and by OP3(Q, P′) the optimization problem obtained by fixing P = P′ in OP3. The input to OP3(Q, K′) is the small hyper-rectangle Q and the fixed RGS gain K′, while the input to OP3(Q, P′) is the small hyper-rectangle Q and the fixed Lyapunov matrix P′. The outputs of OP3(Q, K′) are the optimized s and P, while the outputs of OP3(Q, P′) are the optimized s and K.

Algorithm 2 Branch and Bound
1. Set ε > 0 and k = 0.
2. Set Q_0 = X_P × X_K and G_0 = {Q_0}.
3. Set L_0 = Φ_L(Q_0) and U_0 = Φ_U(Q_0).
4. while (U_k − L_k > ε)
5.   Select Q̄ from G_k such that L_k = Φ_L(Q̄).
6.   Set G_{k+1} = G_k − {Q̄}.
7.   Split Q̄ along its longest edge into Q̄_1 and Q̄_2.
8.   for (i = 1, 2)
9.     if (Φ_L(Q̄_i) ≤ U_k)
10.      Compute Φ_U(Q̄_i).
11.      Set G_{k+1} = G_{k+1} ∪ {Q̄_i}.
12.    end
13.  end
14.  Set U_{k+1} = min_{Q ∈ G_{k+1}} Φ_U(Q).
15.  Pruning: G_{k+1} = G_{k+1} − {Q : Φ_L(Q) > U_{k+1}}.
16.  Set L_{k+1} = min_{Q ∈ G_{k+1}} Φ_L(Q).
17.  Set k = k + 1.
18. end
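The bound-branch-prune skeleton of Algorithm 2 can be demonstrated on a small stand-in problem of ours: globally minimizing the bilinear function xy over [−1, 1]², with Φ_L given by the McCormick-style corner bound and Φ_U by evaluating the objective at the box centroid. This mirrors the structure of the algorithm, not the actual BMI solve.

```python
import heapq

def phi_L(box):
    """Lower bound: for a bilinear term, the minimum over the box corners."""
    (lx, ux), (ly, uy) = box
    return min(lx * ly, lx * uy, ux * ly, ux * uy)

def phi_U(box):
    """Upper bound: objective at any feasible point; the centroid here."""
    (lx, ux), (ly, uy) = box
    return 0.5 * (lx + ux) * 0.5 * (ly + uy)

def branch_and_bound(box, eps=1e-4):
    U = phi_U(box)
    G = [(phi_L(box), box)]              # min-heap keyed on the lower bound
    while True:
        L, Q = heapq.heappop(G)          # region attaining the current lower bound
        if U - L < eps:
            return U
        (lx, ux), (ly, uy) = Q           # split along the longest edge
        if ux - lx >= uy - ly:
            kids = [((lx, (lx + ux) / 2), (ly, uy)), (((lx + ux) / 2, ux), (ly, uy))]
        else:
            kids = [((lx, ux), (ly, (ly + uy) / 2)), ((lx, ux), ((ly + uy) / 2, uy))]
        for kid in kids:
            l = phi_L(kid)
            if l <= U:                   # pruning: drop regions that cannot improve
                U = min(U, phi_U(kid))
                heapq.heappush(G, (l, kid))

best = branch_and_bound(((-1.0, 1.0), (-1.0, 1.0)))  # true minimum is -1 at (-1, 1)
```

As in Algorithm 2, the gap U − L shrinks because splitting a box tightens its relaxation, and pruning keeps the live list G from growing without bound.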
Under the various definitions introduced above, the Branch and Bound algorithm [32] calculates the global minimum of OP2 to an absolute accuracy of ε > 0 within finite time. The pseudocode of the Branch and Bound method is given in Algorithm 2.

APPENDIX C
CONTROL THEORETIC DEFINITIONS AND CONCEPTS

The control theoretic analysis presented in this paper relies on definitions and theorems from three broad areas. These concepts are standard and can easily be found in books like [42], [36].

A. Real Analysis

Definition 1 (Cartesian Product of Sets): The Cartesian product of two sets A and B, denoted by A × B, is defined as the set of all ordered pairs (a, b) where a ∈ A and b ∈ B.

Notion 1 (Connected Set): A set is said to be connected if for any two points in that set, there exists at least one path joining those two points which also lies in the set. Note that this is not the "general definition" of a connected set.

Notion 2 (Compact Set): In Euclidean space R^n, a closed and bounded set is called a compact set. Also, a compact set in R^n is always closed and bounded. Note that this is not the "general definition" of a compact set.

B. Linear Algebra

Definition 2 (Euclidean Norm of a Vector): For a vector x ∈ R^n, the Euclidean norm ‖x‖ is defined as ‖x‖ := √(x^T x). Throughout the entire paper, "norm" means "Euclidean norm" unless mentioned otherwise.

Definition 3 (Induced Spectral Norm of a Matrix): For a matrix A ∈ R^{m×n}, the induced spectral norm ‖A‖ is defined as

‖A‖ := sup_{x ∈ R^n − {0}} ‖Ax‖/‖x‖

It can equally be defined as ‖A‖ := sup_{‖x‖=1} ‖Ax‖.

Definition 4 (Positive (Negative) Definite (Semi-Definite) Matrix): A square matrix A ∈ R^{n×n} is said to be positive definite if x^T Ax > 0 ∀ x ∈ R^n − {0}, and is said to be positive semi-definite if x^T Ax ≥ 0 ∀ x ∈ R^n.
A matrix A is said to be negative definite if −A is positive definite, and is said to be negative semi-definite if −A is positive semi-definite.

Linear Algebra Theorems:
1) Properties of norms:
a) For a vector x ∈ R^n, if ‖x‖ = 0 then x = 0. Also, if ‖x‖ → 0 then x → 0.
b) For any two matrices A, B ∈ R^{n×m}, ‖A + B‖ ≤ ‖A‖ + ‖B‖.
c) For any matrix A ∈ R^{n×m} and a scalar α, ‖αA‖ = |α|‖A‖.
d) For any two matrices A ∈ R^{n×m} and B ∈ R^{m×p}, ‖AB‖ ≤ ‖A‖‖B‖.
e) For any matrix A ∈ R^{n×m} and a vector x ∈ R^m, ‖Ax‖ ≤ ‖A‖‖x‖.
2) The eigenvalues of a symmetric positive (negative) definite matrix are positive (negative).
3) For any symmetric matrix A ∈ R^{n×n} and a vector x ∈ R^n, λ_m(A)‖x‖² ≤ x^T Ax ≤ λ_M(A)‖x‖².
4) For any symmetric positive definite (semi-definite) matrix A, ‖A‖ = λ_M(A).
5) For any square matrix A, x^T Ax ≤ ‖A‖‖x‖².

C. Convex Analysis

Definition 5 (Convex Set): A set A is said to be convex if for any x, y ∈ A, θx + (1 − θ)y ∈ A ∀ θ ∈ [0, 1].

Definition 6 (Convex Function): Let A ⊆ R^n be a convex set. A function f : A → R is convex if f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) for all x, y ∈ A and for all 0 ≤ θ ≤ 1.

Definition 7 (Convex Combination and Convex Hull): Given a set A = {a_1, a_2, ..., a_n}, its convex combinations are those elements which can be expressed as θ_1 a_1 + θ_2 a_2 + ... + θ_n a_n where θ_1 + θ_2 + ... + θ_n = 1 and θ_i ≥ 0, i = 1, 2, ..., n. The set of all convex combinations of set A is called the convex hull of A. The convex hull of set A is indeed the smallest convex set containing A.
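Two of the linear algebra facts listed above (the Rayleigh-quotient bound in item 3, plus the symmetric-matrix form of item 4, where ‖A‖ equals the largest eigenvalue magnitude) can be verified numerically in a few lines:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M + M.T                       # random symmetric test matrix
x = rng.standard_normal(4)

lam = np.linalg.eigvalsh(A)       # eigenvalues, sorted ascending
quad = x @ A @ x                  # x^T A x
n2 = x @ x                        # ||x||^2

low_ok  = lam[0] * n2 <= quad + 1e-9            # lambda_m(A)||x||^2 <= x^T A x
high_ok = quad <= lam[-1] * n2 + 1e-9           # x^T A x <= lambda_M(A)||x||^2
norm_ok = abs(np.linalg.norm(A, 2) - max(abs(lam[0]), abs(lam[-1]))) < 1e-9
```

The Rayleigh-quotient bound is the step used repeatedly in the paper's Lyapunov arguments to sandwich the error energy E = x^T P x between λ_m(P)‖x‖² and λ_M(P)‖x‖².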
Definition 8 (Semi-Definite Program): A semi-definite program or SDP is an optimization problem in the variable x = [x_1 x_2 ⋯ x_n]^T ∈ R^n with the generic structure

minimize: c^T x
subject to: F_0 + x_1 F_1 + x_2 F_2 + ... + x_n F_n ⪯ 0
Ax = b

where F_0, F_1, F_2, ..., F_n ∈ S, c ∈ R^n, A ∈ R^{p×n} and b ∈ R^p.

ACKNOWLEDGMENT

We would like to thank Prof. Radha Krishna Ganti for providing invaluable help in areas related to convex optimization.

REFERENCES

[1] L. Chua, "Memristor-the missing circuit element," IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, 1971.
[2] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, vol. 453, no. 7191, pp. 80–83, 2008.
[3] Y. Ho, G. M. Huang, and P. Li, "Dynamical properties and design analysis for nonvolatile memristor memories," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 58, no. 4, pp. 724–736, 2011.
[4] Y. N. Joglekar and S. J. Wolf, "The elusive memristor: properties of basic electrical circuits," Eur. J. Phys., vol. 30, no. 4, pp. 661–675, 2009.
[5] F. Corinto and A. Ascoli, "A boundary condition-based approach to the modeling of memristor nanostructures," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, pp. 2713–2726, 2012.
[6] S. Kvatinsky, E. G. Friedman, A. Kolodny, and U. C. Weiser, "TEAM: Threshold adaptive memristor model," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 60, no. 1, pp. 211–221, 2013.
[7] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, "Nanoscale memristor device as synapse in neuromorphic systems," Nano Letters, vol. 10, no. 4, pp. 1297–1301, 2010.
[8] Y. V. Pershin and M. Di Ventra, "Experimental demonstration of associative memory with memristive neural networks," Neural Networks, vol. 23, no. 7, pp. 881–886, 2010.
[9] A. Sheri, H. Hwang, M. Jeon, and B. Lee, "Neuromorphic character recognition system with two PCMO-memristors as a synapse," IEEE Trans. Ind. Electron., vol. 61, pp. 2933–2941, 2014.
[10] Y. V.
Pershin, S. La Fontaine, and M. Di Ventra, "Memristive model of amoeba learning," Phys. Rev. E, vol. 80, no. 2, p. 021926, 2009.
[11] G. Z. Cohen, Y. V. Pershin, and M. Di Ventra, "Second and higher harmonics generation with memristive systems," Appl. Phys. Lett., vol. 100, no. 13, p. 133109, 2012.
[12] Y. V. Pershin and M. Di Ventra, "Practical approach to programmable analog circuits with memristors," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 57, no. 8, pp. 1857–1864, 2010.
[13] S. Shin, K. Kim, and S. Kang, "Memristor applications for programmable analog ICs," IEEE Trans. Nanotechnol., vol. 10, no. 2, pp. 266–274, 2011.
[14] D. Jeltsema and A. Doria-Cerezo, "Port-Hamiltonian formulation of systems with memory," Proc. IEEE, vol. 100, pp. 1928–1937, 2012.
[15] D. Jeltsema and A. J. van der Schaft, "Memristive port-Hamiltonian systems," Math. Comput. Model. Dyn. Syst., vol. 16, pp. 75–93, 2010.
[16] J. Hu and J. Wang, "Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays," in Proc. Int. Joint Conf. Neural Netw., 2010, pp. 1–8.
[17] A. Delgado, "The memristor as controller," in IEEE Nanotechnology Materials and Devices Conference, 2010, pp. 376–379.
[18] J. S. A. Doria-Cerezo, L. van der Heijden, "Memristive port-Hamiltonian control: Path-dependent damping injection in control of mechanical systems," in Proc. 4th IFAC Workshop on Lagrangian and Hamiltonian Methods for Non Linear Control, 2012, pp. 167–172.
[19] R. Pasumarthy, G. Saha, F. Kazi, and N. Singh, "Energy and power based perspective of memristive controllers," in Proc. IEEE Conf. Decis. Contr., 2013, pp. 642–647.
[20] J. J. Gorman and B. Shapiro, Feedback Control of MEMS to Atoms. Springer, 2011.
[21] E. Ozalevli and P. E. Hasler, "Tunable highly linear floating-gate CMOS resistor using common-mode linearization technique," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 55, pp. 999–1010, 2008.
[22] L. O. Chua and S. M.
Kang, "Memristive devices and systems," Proc. IEEE, vol. 64, no. 2, pp. 209–223, 1976.
[23] T. A. Wey and W. D. Jemison, "Variable gain amplifier circuit using titanium dioxide memristors," IET Circuits, Devices & Syst., vol. 5, no. 1, pp. 59–65, 2011.
[24] S. Haykin, Communication Systems, 5th ed. Wiley Publishing, 2009.
[25] S. P. Boyd, Linear Matrix Inequalities in System and Control Theory. SIAM, 1994.
[26] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, 2001.
[27] H. H. Rosenbrock, "The stability of linear time-dependent control systems," Int. J. Electron. & Contr., vol. 15, pp. 73–80, 1963.
[28] V. Solo, "On the stability of slowly time-varying linear systems," Math. of Contr., Signals & Syst., vol. 7, pp. 331–350, 1994.
[29] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, "The quickhull algorithm for convex hulls," ACM Trans. on Mathematical Software, vol. 22, no. 4, pp. 469–483, 1996.
[30] Proc. Amer. Contr. Conf., 1999, pp. 1385–1389.
[31] Y.-Y. Cao, J. Lam, and Y.-X. Sun, "Static output feedback stabilization: an ILMI approach," Automatica, vol. 34, no. 12, pp. 1641–1645, 1998.
[32] K.-C. Goh, M. Safonov, and G. Papavassilopoulos, "A global optimization approach for the BMI problem," in Proc. IEEE Conf. Decis. Contr., vol. 3, 1994, pp. 2009–2014.
[33] H. Fujioka and K. Hoshijima, "Bounds for the BMI eigenvalue problem - a good lower bound and a cheap upper bound," Trans. Society of Instr. & Contr. Eng., vol. 33, pp. 616–621, 1997.
[34] M. Kawanishi and Y. Shibata, "BMI global optimization using parallel branch and bound method with a novel branching method," in Proc. Amer. Contr. Conf., 2007, pp. 1664–1669.
[35] K. Ogata, Modern Control Engineering, 5th ed. Prentice Hall, 2010.
[36] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2009.
[37] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1997.
[38] A. Hassibi, J. How, and S.
Boyd, "Low-authority controller design via convex optimization," in Proc. IEEE Conf. Decis. Contr., 1998, pp. 1385–1389.
[39] I. Barkana, "Simple adaptive control - a stable direct model reference adaptive control methodology - brief survey," Int. J. Adapt. Contr. Signal Proc., 2013.
[40] CVX Research, Inc., "CVX: Matlab software for disciplined convex programming, version 2.0," http://cvxr.com/cvx, Aug. 2012.
[41] M. Fukuda and M. Kojima, "Branch-and-cut algorithms for the bilinear matrix inequality eigenvalue problem," Comput. Optim. Appl., vol. 19, 2001.
[42] E. Kreyszig, Introductory Functional Analysis with Applications. Wiley, 1989.

Gourav Saha received the B.E. degree from Anna University, Chennai, India, in 2012. Since 2013, he has been working towards his M.S. (by research) degree in the Department of Electrical Engineering at the Indian Institute of Technology, Madras. His current research interests include applications of memristors and memristive systems, control of MEMS, and modelling and control of cyber-physical systems with specific focus on cloud computing.

Ramkrishna Pasumarthy obtained his PhD in Systems and Control from the University of Twente, The Netherlands, in 2006. He is currently an Assistant Professor at the Department of Electrical Engineering, IIT Madras, India. His research interests lie in the area of modeling and control of physical systems, infinite dimensional systems and control of computing systems, with specific focus on cloud computing systems.