A Multi-Stage Adaptive Sampling Scheme for Passivity Characterization of Large-Scale Macromodels
Marco De Stefano, Student Member, IEEE, Stefano Grivet-Talocia, Fellow, IEEE, Torben Wendt, Student Member, IEEE, Cheng Yang, Member, IEEE, Christian Schuster, Senior Member, IEEE
Abstract—This paper proposes a hierarchical adaptive sampling scheme for passivity characterization of large-scale linear lumped macromodels. Here, large-scale is intended both in terms of dynamic order and especially number of input/output ports. Standard passivity characterization approaches based on spectral properties of associated Hamiltonian matrices are either inefficient or non-applicable for large-scale models, due to an excessive computational cost. This paper builds on existing adaptive sampling methods and proposes a hybrid multi-stage algorithm that is able to detect the passivity violations with limited computing resources. Results from extensive testing demonstrate a major reduction in computational requirements with respect to competing approaches.
I. INTRODUCTION
Macromodeling techniques are generally considered as key enabling factors for the efficient simulation of complex devices and systems. A macromodel approximates the input-output response of a given structure through a compact set of behavioral equations. Such equations are most commonly derived through some fitting process [1] applied to a set of sampled responses, usually in the frequency domain. See [2] for an overview. The focus of this work is on macromodeling of passive structures, which are unable to generate energy on their own [3]–[6]. Model passivity must therefore be checked and enforced through suitable constraints, which are usually applied through a perturbation step applied to an initially non-passive model obtained by some unconstrained fitting [7]–[19]. Lack of model passivity can trigger instabilities in transient analyses [20] and must therefore be avoided with care. Macromodels are intended to be compact and reduced-order, hence small-size. Nonetheless, the robustness of state-of-the-art macromodeling algorithms has enabled applications that are aggressively driving model complexity beyond current capabilities. One typical example is full-package modeling for signal, power, or coupled signal-power integrity analysis, in which case the number of input/output ports can reach hundreds or even thousands [21]. A second application is the modeling of energy-selective electromagnetic shielding structures [22] formed by conventional enclosures having openings that are closed by regular grids of nonlinear elements [23]–[27]. An efficient time-domain simulation calls for a macromodel that represents in a compact way the electromagnetic behavior of the shield; also in this case the number of input/output ports can reach the order of thousands or more.
A further complication is provided by the possibly high dynamic order (number of poles) of the macromodels, as required by accuracy requirements over extended frequency bands. Recent Vector Fitting (VF) [1] implementations [28], [29], including parallel codes [30], [31], have been demonstrated to scale very favorably with the model size. Conversely, standard passivity enforcement schemes are more critical and less scalable, despite the advancements that have been documented over the last two decades [32]. The major bottleneck is the passivity verification step, which must detect not only whether a given initial model is passive, but should also provide a precise localization of the passivity violations, so that a successive perturbation stage can be set up to remove them. The complexity of this model perturbation step is less critical, so passivity enforcement is not considered here. Passivity characterization is far more demanding in terms of computational resources, both memory and runtime, and therefore more problematic. To the best of the authors' knowledge, there is no documented passivity characterization approach that can provide reliable results in a limited runtime and with limited memory occupation for systems with input-output ports in the order of hundreds and possibly exceeding one thousand. The most prominent passivity characterization approaches can be roughly grouped in three main classes, based respectively on Linear Matrix Inequalities (LMI) [6], direct sampling, and Hamiltonian matrix properties [16], [33]. LMIs provide the best approach from a theoretical standpoint, but their applicability is limited to small-scale systems. Pure sampling approaches are very fast and potentially scale well, but they are likely to miss passivity violations, making the verification unreliable. Hamiltonian matrices with their spectral properties are the usual method of choice for passivity verification of small and medium-scale problems.
Extension to large-scale systems has been attempted in [17], possibly through hybridization, pre- or post-processing with adaptive sampling methods [18]. However, the Krylov subspace iterations required for the spectral characterization of large and sparse Hamiltonian matrices make the entire approach infeasible when the number of ports exceeds about one hundred. The above difficulties motivate the passivity verification approach that we propose in this work. We build on the adaptive sampling scheme of [18], and we completely avoid the use of Hamiltonian matrices and fully coupled LMI conditions, which are the main drivers of computational cost beyond what is usually affordable in design flows. The main objective that we pursue is to construct an adaptive sampling scheme that is fast and at the same time reliable. We should stress that there is no mathematical guarantee that checking local passivity conditions on a finite number of points will not miss a violation. Therefore, the main objective of this work is to make the presence of undetected passivity violations extremely unlikely. This is the reason why we propose a hybrid scheme that combines different basic sampling methods in a hierarchical way. The proposed algorithm performs an initial pole-based sampling intended to track fast variations. The results are then used to produce an adaptive frequency warping and subdivision into a possibly large number of subbands, which are then subjected to hierarchical refinement [34]. The proposed scheme was tested against a large number of regression test cases, with a number of ports ranging from P = 1 to P = 640 and a dynamic order (number of states) ranging from N = 10 to more than 10^4. The results show that standard Hamiltonian-based approaches remain competitive and should be preferred for small-scale systems.
Conversely, the proposed approach outperforms Hamiltonian tests for medium and large-scale systems, with major runtime speedups for the largest cases. This paper is organized as follows. Section II provides some background information and sets notation. Section III states the main problem by providing an illustrative example that highlights the main difficulties to be addressed. Section IV introduces the adaptive frequency warping scheme that forms the initial step of the proposed sampling scheme. Section V introduces the multi-scale search to be applied as a second-stage adaptive refinement. Numerical results are presented in Section VI and conclusions are drawn in Section VII.

II. BACKGROUND AND NOTATION
Throughout this work we consider a lumped Linear Time Invariant (LTI) model, which is known either through a pole-residue expansion or a state-space realization

H(s) = Σ_{n=1}^{¯n} R_n/(s − p_n) + R_0 = C(sI − A)^{−1}B + D    (1)

where s is the Laplace variable and the transfer matrix H(s) ∈ C^{P×P}, with P the number of input-output ports. The dynamic order of the model is denoted as N, which is the size of the state-space matrix A. In case the residue matrices R_n are full-rank, we have N = ¯nP, see [2], [35], [36] for details. In general, N ≫ P. We assume that the bandwidth where the model is assumed and intended to be accurate is [0, ω_max], and that all model parameters (pole-residue or state-space matrices) are available from some initial fitting/approximation stage. We assume that all state-space matrices are real-valued and that the model poles p_n (equivalently, the eigenvalues of A) are strictly stable. This condition is easily enforced by widespread fitting schemes such as VF [1].

A. Passivity verification
We will consider only models (1) in scattering representation, so that H(s) approximates the scattering matrix of the modeled system. This is not a limitation, since the proposed approach can be extended to other representations (impedance, admittance, or hybrid) with simple modifications. See [2] for a complete theoretical framework applicable to all representations. We review below the three main available approaches for passivity verification.
1) Sampling local passivity conditions:
For scattering models (1) with real-valued realization and strictly stable poles, the passivity conditions can be stated as the unique constraint

σ_max{H(jω)} ≤ γ = 1    ∀ω ∈ R.    (2)

This condition expresses that the maximum singular value of the scattering matrix must not exceed the threshold γ = 1 at any real frequency. Under the working assumptions, (2) is a sufficient condition for H(s) to be Bounded Real, hence passive [3], [5], [6]. Sampling approaches test (2) over a finite number of frequency points {ω_k, k = 1, ..., K}. Determination of σ_max at a single frequency requires O(P^3) operations, hence the overall complexity for K samples amounts to O(KP^3). Since each individual frequency ω_k can be processed independently, there is no significant memory cost, even for large P. The main advantage of this sampling-based check is the availability of all local maxima of σ_max, obtained as a trivial post-processing. These maxima are essential to set up passivity enforcement through constrained perturbation. The main disadvantage of checking (2) over K finite points is the possibility of missing local maxima exceeding γ = 1 by a small amount or over a small frequency extent, due to the limited number of points that can be processed. We remark that the (scalar) function of frequency

ϕ(ω) = σ_max{H(jω)}    (3)

henceforth denoted as the passivity metric, is generally smooth and differentiable, with the possible exception of a finite number of points where two singular values cross. In the latter case, ϕ(ω) is guaranteed to be continuous, since all singularities (poles) of H(s) fall outside the imaginary axis.
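The sampling-based evaluation of (2)-(3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-port model, its pole, and residue matrices are hypothetical, chosen only so that the metric has a visible resonance near ω = 2.

```python
import numpy as np

def eval_H(omega, poles, residues, R0):
    """Evaluate H(j*omega) of a pole-residue model as in (1):
    H(s) = sum_n R_n / (s - p_n) + R0."""
    s = 1j * omega
    H = R0.astype(complex).copy()
    for p, R in zip(poles, residues):
        H += R / (s - p)
    return H

def passivity_metric(omega, poles, residues, R0):
    """phi(omega) = sigma_max{H(j*omega)}, the passivity metric (3)."""
    return np.linalg.svd(eval_H(omega, poles, residues, R0),
                         compute_uv=False)[0]

# Hypothetical 2-port model: one stable complex pole pair near j*2.
p = -0.1 + 2.0j
poles = [p, np.conj(p)]
Rn = np.array([[0.03, 0.01], [0.01, 0.02]], dtype=complex)
residues = [Rn, np.conj(Rn)]
R0 = 0.1 * np.eye(2)

# Test condition (2) on a finite frequency grid.
grid = np.linspace(0.0, 10.0, 2001)
phi = np.array([passivity_metric(w, poles, residues, R0) for w in grid])
print("max sampled phi:", phi.max(),
      "passive on grid:", bool(phi.max() <= 1.0))
```

Note that the grid resolution drives the cost O(KP^3) and, as discussed above, a too coarse grid may step over a narrow peak entirely.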
2) Linear Matrix Inequalities:
LMI conditions provide a fully algebraic test that does not require sampling. We recall that the scattering system (1) is passive (dissipative) if and only if there exists P = P^T > 0 such that

( A^T P + P A + C^T C    P B + C^T D  )
( B^T P + D^T C         −(I − D^T D) ) ≤ 0.    (4)

This condition, known as the Bounded Real Lemma (BRL) or Kalman-Yakubovich-Popov (KYP) Lemma [6], [37], has the major disadvantage of requiring the determination of the N × N matrix P related to the internal energy storage. The computational cost for checking the feasibility of (4) is O(N^6), which can be reduced to O(N^4) with specialized formulations [38]. This cost becomes exceedingly high for large-scale systems. One additional disadvantage of (4) is that it is a pass/fail test which does not provide localization of the passivity violations that can be used in a successive enforcement. Passivity enforcement schemes exist that use (4) directly as a constraint [7], [39], but these inherit the poor scalability with large P and N, see [19].

TABLE I
ASYMPTOTIC SCALING FACTORS OF COMPUTATIONAL COSTS FOR DIFFERENT PASSIVITY CHARACTERIZATION APPROACHES

Characterization method          Operations
Linear Matrix Inequalities [38]  O(N^4)
Hamiltonian (full)               O((2N)^3)
Sampling                         O(K(P^3 + P^2 ¯n))

N: model order (number of states); P: number of input-output ports; K: number of frequency samples; ¯n: number of (common) poles
3) Hamiltonian matrices:
Spectral properties of Hamiltonian matrices provide the standard approach for checking passivity of small and medium-scale models. Under the working assumptions, the model is (strictly) passive if and only if the
Hamiltonian matrix

M = ( A + B R^{−1} D^T C    B R^{−1} B^T            )
    ( −C^T S^{−1} C         −A^T − C^T D R^{−1} B^T )    (5)

with R = I − D^T D and S = I − D D^T, has no purely imaginary eigenvalues μ_k = jω̂_k. Such eigenvalues denote the crossing points of any singular value of H(jω) with the passivity threshold γ = 1, see [16], [33]. Therefore, they also provide localization of the passivity violations. Unfortunately, the full eigenvalue spectrum of M is required, whose numerical determination scales as O((2N)^3). Extension to large-scale and sparse (decompositions) was documented in [17], [18], allowing some improvement for systems with moderate P and possibly large ¯n. Extension to large port count P appears problematic, since the Krylov subspace iterations required for detection of purely imaginary eigenvalues of M require the repeated inversion of full P × P matrices, leading to a cost that once again becomes impractical in the large-scale case. We remark that (5) is applicable only when the state-space matrix D has no singular values too close to γ = 1. In this case, it is preferable to process the extended Hamiltonian pencil (M_e, K) defined as [40]

M_e = ( A    0     B    0    )        K = ( I  0  0  0 )
      ( 0    −A^T  0    −C^T )            ( 0  I  0  0 )
      ( 0    B^T   −I   D^T  )            ( 0  0  0  0 )
      ( C    0     D    −I   )            ( 0  0  0  0 )    (6)

and to find its purely imaginary generalized eigenvalues μ_k. Some of the test cases that will be investigated in Section VI fall into this situation, in which the standard Hamiltonian matrix (5) is ill-defined. Throughout this work, we use (6) only when |σ_max{D} − 1| falls below a small threshold, since the associated computational cost is higher than for (5).

B. Discussion
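As a concrete illustration of the Hamiltonian test, the following sketch builds M as in (5) for a hypothetical first-order scattering model and recovers the threshold-crossing frequencies from its purely imaginary eigenvalues. The test model H(s) = g/(s + 1) and all names are ours, not taken from the paper's implementation.

```python
import numpy as np

def hamiltonian_crossings(A, B, C, D, tol=1e-8):
    """Build the Hamiltonian matrix M of (5) and return the frequencies
    where a singular value of H(j*omega) crosses gamma = 1, i.e. the
    imaginary parts of the purely imaginary eigenvalues of M."""
    P = D.shape[0]
    Ri = np.linalg.inv(np.eye(P) - D.T @ D)   # R^{-1}
    Si = np.linalg.inv(np.eye(P) - D @ D.T)   # S^{-1}
    M = np.block([
        [A + B @ Ri @ D.T @ C,  B @ Ri @ B.T],
        [-C.T @ Si @ C,         -A.T - C.T @ D @ Ri @ B.T],
    ])
    mu = np.linalg.eigvals(M)
    imag = mu[np.abs(mu.real) < tol * max(1.0, np.abs(mu).max())]
    return np.sort(np.abs(imag.imag))

# Hypothetical SISO model H(s) = g/(s + 1): since |H(j*omega)|^2 =
# g^2/(1 + omega^2), the crossing with gamma = 1 is at omega = sqrt(g^2 - 1).
g = 2.0
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[g]]);    D = np.array([[0.0]])
print(hamiltonian_crossings(A, B, C, D))   # approx [sqrt(3), sqrt(3)]
```

For this model the eigenvalues of M are exactly ±j√3, matching the analytical crossing frequency; for large N it is precisely this dense eigensolution that becomes the bottleneck discussed above.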
Table I summarizes the number of elementary operations for the three leading passivity characterization approaches. Only the leading terms are included, and the number of floating point operations is reported up to a constant factor which depends on the specific algorithm or numerical library used to implement each method. As an example, let us consider a system with P = 400 ports and ¯n = 50 poles (hence N = 20000). The LMI and the Hamiltonian approaches would require about 3 GB and 12 GB of storage, respectively, with CPU costs that are orders of magnitude larger than the cost of sampling even a very large number K of frequencies. From this example and Table I, it is evident that the only framework that has the potential to scale favorably with the problem size is the sampling approach. Therefore, we concentrate our efforts on the sampling scheme, with the main objective of reducing the likelihood of missing passivity violations and of classifying a non-passive model as passive.

III. PROBLEM STATEMENT AND GOALS
Considering the passivity condition (2) and the definition of the passivity metric (3), we can state the main problem that is here addressed as: find all local maxima larger than a given threshold of a continuous univariate function of frequency over the entire frequency axis.
This problem may appear simple, but various difficulties arise in practical implementation. These are discussed below with reference to the top panel of Fig. 1 and its enlarged view in the top panel of Fig. 2.
• The passivity metric ϕ(ω) is usually characterized by multiple peaks, and it is necessary to identify all of them in order to check if they exceed the threshold γ. It is imperative not to miss any local maximum.
• Some peaks can be extremely narrow, hence they can be almost invisible with an inadequate sampling.
• Some peaks may be characterized by a local maximum that is very close to the threshold; in such cases, it may be difficult to detect whether there exists a passivity violation with ϕ(ω) > γ at some frequency point.
The above difficulties can only be addressed by adapting the sampling process, in order to automatically refine the resolution where ϕ(ω) is characterized by fast variations and/or its values are close to the threshold γ. At the same time, it is mandatory to limit as much as possible the total number K of computed samples. These objectives are here pursued through a multi-stage adaptive sampling scheme. First, a preprocessing step is applied to split the frequency band into a possibly large number of subbands, whose individual extent is adaptively defined and rescaled so that the total variation of the passivity metric becomes nearly uniform (see Section IV). Second, a hierarchical tree-based refinement is applied within each individual subband to track all local maxima (see Section V). The combination of these two stages will be demonstrated to outperform competing approaches, while retaining (and in some cases even improving) the reliability of algebraic passivity verification methods.

IV. STEP 1: POLE-BASED ADAPTIVE FREQUENCY WARPING
A preview of the results of this first step is available in the bottom panels of Fig. 1 and its enlarged view of Fig. 2. We want to identify a nonlinear (invertible) frequency transformation ζ = W(ω) that induces the following change of variable

θ(ζ) = ϕ(W^{−1}(ζ))    (7)

with the main objective of "flattening" the local variations of ϕ(ω), so that the resulting θ(ζ) is characterized by peaks with an approximately uniform width. The change of variable is constructed based on a set of control points Ω = {ω̂_ℓ, ℓ = 0, ..., L} such that

ω̂_0 < ω̂_1 < ··· < ω̂_ℓ < ω̂_{ℓ+1} < ··· < ω̂_L = ∞    (8)

These points are used to build a local linear map between the normalized subband ζ ∈ [ℓ, ℓ + 1] and the corresponding subband ω ∈ [ω̂_ℓ, ω̂_{ℓ+1}]. The global change of variable W is constructed piecewise as

ζ = ℓ + (ω − ω̂_ℓ)/Δ_ℓ    ∀ω ∈ [ω̂_ℓ, ω̂_{ℓ+1}],    ℓ = 0, ..., L − 2    (9)

where Δ_ℓ = ω̂_{ℓ+1} − ω̂_ℓ is the extent of the ℓ-th subband. The last (infinite-sized) subband is handled as

ζ = ℓ + (ω − ω̂_ℓ)/ω    ∀ω ∈ [ω̂_ℓ, ω̂_{ℓ+1}],    ℓ = L − 1    (10)

The piecewise linear frequency warping function W maps the entire frequency axis ω ∈ [0, +∞) into the normalized interval [0, L]. The effects are visible by comparing the top and bottom panels of Fig. 1 or Fig. 2 (enlarged view). The control points highlighted with crosses in the top panels are mapped to uniformly distributed points in the bottom panels. Wherever the density of the control points is higher, corresponding to fast variations of ϕ(ω), the final effect after renormalization is to stretch narrow peaks and shrink wide peaks, so that all local maxima in the normalized domain ζ are located at peaks with a comparable and statistically uniform width. We see from Fig. 2 that the number of local maxima in each subband is at most a few units (usually only one), with the most probable situation characterized by this maximum occurring at one edge of the subinterval. This favorable situation is guaranteed by a careful selection of the control points, discussed below.

Fig. 1. Top panel: graphical illustration of the adaptive sampling-based passivity characterization. Bottom panel: rescaled passivity metric θ(ζ) after adaptive frequency warping.

Fig. 2. Enlarged view of Fig. 1.
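The piecewise map (9)-(10) can be sketched as follows. This is a minimal illustration under our own conventions: only the finite control points ω̂_0, ..., ω̂_{L−1} are stored (ω̂_L = ∞ is implicit), and the control point values are hypothetical, chosen to be dense around a resonance near ω = 2.

```python
import numpy as np

def warp(omega, ctrl):
    """Piecewise-linear warping zeta = W(omega) per (9)-(10).
    ctrl = [w_0, ..., w_{L-1}] are the finite control points (w_0 = 0);
    the last subband [w_{L-1}, inf) is mapped onto [L-1, L]."""
    ctrl = np.asarray(ctrl, dtype=float)
    L = len(ctrl)                       # number of subbands, incl. infinite one
    ell = np.searchsorted(ctrl, omega, side="right") - 1
    if ell < L - 1:                     # finite subband, eq. (9)
        delta = ctrl[ell + 1] - ctrl[ell]
        return ell + (omega - ctrl[ell]) / delta
    return (L - 1) + (omega - ctrl[-1]) / omega   # infinite subband, eq. (10)

# Hypothetical control points, dense near a resonance at omega ~ 2.
ctrl = [0.0, 1.8, 1.9, 2.0, 2.1, 2.2, 10.0]
print(warp(1.95, ctrl))   # ~2.5: midpoint of the narrow subband [1.9, 2.0]
print(warp(20.0, ctrl))   # 6.5: halfway into the last (infinite) subband
```

Note how the narrow subband [1.9, 2.0] occupies the same unit interval in ζ as the wide subband [2.2, 10.0], which is exactly the peak-flattening effect described above.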
A. Choosing control points
The strategy for constructing the control points ω̂_ℓ aims at reducing Δ_ℓ in those regions that are characterized by fast variations of ϕ(ω), and conversely relaxing and enlarging Δ_ℓ where ϕ(ω) is slowly-varying or nearly constant. The main underlying assumption is that variations of the singular values are induced by variations in the transfer matrix elements, which in turn are induced by the location of each pole/residue term. Therefore, we start with the pole-based sample distribution discussed in [18], which for each pair of complex conjugate poles p_n = α_n ± jβ_n contributes the following candidate frequency samples

ω_{n,r} = β_n + α_n tan( rπ / (2(R + 1)) )    r = −R, ..., R    (11)

The parameter R is used to control how many samples are due to each pole term in a partial fraction expansion of the model response. Only frequencies ω_{n,r} ≥ 0 are retained and assembled in a single set Ω, which is then sorted in ascending order. A final scan is performed to remove samples that are closer than a predefined resolution Δω, which is defined as in [18] as

Δω = p_max/(Nρ)    (12)

where ρ ≫ 1 is another control parameter and p_max = max(ω_max, max_n |p_n|). The above-defined set Ω was tested and proved to be inadequate for a pure sampling-based passivity check. In particular,
1) the parameter settings suggested in [18] result in a large set of subbands, improving the robustness of the procedure but reducing its overall speed. A more efficient check would call for small R.
This strategy would provide an adequate sample distribution in band, but the passivity metric variations for ω > ω_max might not be sampled appropriately.
2) variations induced by real poles are not adequately sampled by (11).
3) poles that are very close to the imaginary axis lead to subbands with very small Δ_ℓ, whose edges may be removed accidentally by (12), leading to inaccurate sampling.
The above issues 1 and 2 are here solved by introducing three separate control parameters R_ν with ν = {cp, rp, hf}, to be used respectively for in-band complex pole pairs, real poles, and poles with either real or imaginary part magnitude that is close to or larger than ω_max. In particular,
• R_cp can be safely chosen very small, even R_cp = 1;
• R_rp should be larger;
• R_hf should also be larger.
Issue 3 is partially compensated by modifying the ω_{n,r} corresponding to highly resonant poles in (11) by setting

α_n ← c · α_n    if Q_n = |β_n|/|α_n| > Q_max    (13)

where Q_max is a prescribed threshold and c = 50. Finally, since the entire real line ω ∈ R needs to be sampled, we add ω = ∞ together with the following κ + 1 logarithmically spaced samples extending d decades beyond the fitting model band

ω_ν = ω_max · 10^{dν/κ}    ν = 0, ..., κ    (14)

where d is an additional control parameter; suitable values of d and κ are suggested in Table II. In summary, frequency warping and renormalization is here achieved by a pole-based control point distribution, whose behavior can be controlled by the eight parameters summarized in Table II. The above guidelines and parameter settings have been heuristically proven to be quasi-optimal over a large number of test cases, see Section VI. It should be considered that, when inserted in a passivity enforcement loop, the number of subbands can be adaptively increased as the passivity violations are iteratively removed and become more difficult to identify, simply by increasing the values of R_ν, as discussed and illustrated in Sec.
V-B through an application example.

V. STEP 2: A MODIFIED NAIVE MULTI-SCALE SEARCH OPTIMIZATION
The second step of the proposed passivity verification algorithm applies a hierarchical adaptive sampling process to the rescaled passivity metric θ(ζ) within each normalized subband ζ ∈ X_ℓ = [ℓ, ℓ + 1], with the main objective of identifying all local maxima that exceed the passivity threshold γ = 1.

Fig. 3. Graphical illustration of an M-tree with M = 3. All leaves up to level h = 3 are denoted by dots. Filled dots represent leaves that have been evaluated (set E_h). Red dots denote all leaves currently in the tree, whereas yellow dots are nodes that have been expanded and relabeled (M odd).

Among the several adaptive algorithms offered by the literature, our starting point is the so-called Naive Multi-scale Search Optimization (NMSO) algorithm [34], which is based on a tree-search divide-and-conquer strategy. Although based on adaptive tree refinement, the NMSO algorithm may suffer from the presence of many local maxima, since its main objective is to find the unique global maximum in its search domain. This is the main reason why we performed the subband splitting and frequency warping in Section IV. Each subband X_ℓ = [ζ_ℓ, ζ_{ℓ+1}] is constructed to include a very limited number of local maxima, usually only one. Therefore, we propose a repeated and independent application of a suitably constructed NMSO scheme to each subband X_ℓ, so that the identification of all local maxima is greatly simplified. The discussion below focuses on X = X_0 = [0, 1] without loss of generality. The proposed algorithm builds a tree T whose nodes are iteratively refined at multiple scales h ≥ 0, in order to partition the search space X at any level h into a set of M^h cells

X_{h,i} = [iM^{−h}, (i + 1)M^{−h}],    i = 0, ..., M^h − 1.    (15)

Each cell has a center point ζ_{h,i} = (i + 1/2)M^{−h}, denoted as a leaf or node in the following.
The partition factor M ≥ 3 defines the number of children to be expanded from each leaf of the tree when increasing the refinement level. The main objective is to find all local maxima of θ(ζ) over the search space X, by evaluating θ_{h,i} = θ(ζ_{h,i}) at a minimal number of tree leaves, which are determined adaptively through hierarchical refinement of the tree. Figure 3 provides a graphical illustration, where the entire set L_h of leaves of a complete tree at a given maximum refinement level h is represented by dots, while the set E_h ⊆ L_h of evaluated leaves is depicted by filled dots.

A. Adaptive sampling rules
The various assumptions and features of the proposed scheme are illustrated below, whereas a pseudocode version is available in Algorithm 1.
Algorithm 1 Find θ_max = max θ(ζ) for ζ ∈ [0, 1]
Require: h_0, M, δζ, δθ, δη, ε, ̺, n_e, δn_e
1:  Initialize μ = 0, h ← h_0, B_0 = ∅
2:  Initialize C_0 = {ζ_{h_0,i}, i = 0, ..., M^{h_0} − 1}, K = M^{h_0}
3:  Initialize θ_max = −∞
4:  while C_μ ≠ ∅ do
5:      Choose current point ζ_{h,ı*} via (20)
6:      Expand current point via (19)
7:      K ← K + M − 1
8:      Update θ_max ← max{θ_max, θ(ζ_{h+1,j}), j ∈ J_{h,ı*}}
9:      if K > n_e then
10:         if U1 AND (U2 OR U3) then
11:             update budget via (34)
12:             update ε ← ̺ε
13:         else
14:             go to 27
15:         end if
16:     end if
17:     if S1 OR S2 OR S3 then
18:         insert new points in basket via (29)
19:         h ← h_min
20:         reset ε
21:     else
22:         flag new points for refinement via (27)
23:         h ← h + 1
24:     end if
25:     μ ← μ + 1
26: end while
27: return samples E_μ = C_μ ∪ B_μ and maximum θ_max
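A greatly simplified Python sketch of this search follows. It keeps only the greedy selection of the best node, the odd-M relabeling of the parent center, and crude variants of the resolution and flatness stopping rules, with a fixed evaluation budget; restarts, budget updates, and the exact condition definitions of Sections V-A.6 and V-A.7 are omitted, and all names, tolerances, and the test metric are ours.

```python
def nmso_max(theta, M=3, h0=1, dzeta=1e-4, dtheta=1e-6, n_e=200):
    """Simplified multi-scale search on [0, 1]: repeatedly refine the
    cell whose center has the largest theta; cells whose width or local
    variation falls below tolerance are moved to the basket."""
    assert M % 2 == 1, "odd partition factor: the parent center is reused"
    # candidates: (level h, cell index i, theta at the cell center)
    cand = [(h0, i, theta((i + 0.5) * M**-h0)) for i in range(M**h0)]
    basket, K = [], M**h0
    while cand and K < n_e:
        cand.sort(key=lambda c: c[2])
        h, i, th = cand.pop()                 # best candidate, cf. eq. (20)
        children = []
        for j in range(M * i, M * (i + 1)):   # cone of influence, eq. (17)
            if j == M * i + M // 2:           # relabeled parent, eq. (18)
                children.append((h + 1, j, th))
            else:
                children.append((h + 1, j, theta((j + 0.5) * M**-(h + 1))))
                K += 1
        vals = [c[2] for c in children]
        s1 = M**-(h + 1) < dzeta              # resolution reached, cf. S1
        s2 = max(vals) - min(vals) < dtheta   # crude flatness test, cf. S2
        (basket if s1 or s2 else cand).extend(children)
    return max(c[2] for c in cand + basket), K

# Hypothetical metric with a single narrow peak of height 1.05 at zeta = 0.3.
theta = lambda z: 1.05 / (1.0 + ((z - 0.3) / 0.02)**2)
tmax, K = nmso_max(theta)
print(tmax, K)   # tmax approaches the true peak value 1.05
```

On this example the search homes in on the violation (θ > 1) with a few hundred evaluations, while a uniform grid with the same resolution 10^-4 would need roughly ten thousand.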
1) General structure:
The dynamic evolution of the tree is performed through successive iterations with index μ. At any iteration, the set of evaluated nodes E_μ is characterized by three levels:
• h_min: the minimum level of any node in E_μ;
• h_max: the maximum level of any node in E_μ;
• h: the level of the node being processed, also denoted as the current node; h is henceforth denoted as the current level.
The elements of E_μ are split into two subsets

E_μ = C_μ ∪ B_μ    (16)

where C_μ collects candidates for refinement at the next iteration, and B_μ forms the so-called basket, which includes those nodes that do not need to be refined since a stop condition is verified (see below).
2) Constraints:
We aim at minimizing the number of function evaluations. Therefore, we allow a total initial budget n_e of function evaluations. Another constraint on which we build our implementation is to enforce M to be odd.
3) Initialization:
The algorithm is initialized for μ = 0 by choosing an initial refinement level h_0 and evaluating the passivity metric θ (our target function) at the corresponding nodes {ζ_{h_0,i}, i = 0, ..., M^{h_0} − 1}. Such samples are inserted into E_0 = C_0, with the initial basket B_0 = ∅.
4) Expansion (refinement):
The expansion process splits a given cell X_{h,i} into its M children at level h + 1, denoted as

X_{h+1,j},    j ∈ J_{h,i} = {Mi, ..., M(i + 1) − 1}    (17)

The set J_{h,i} collects the indices of the nodes within the cone of influence of node ζ_{h,i} under refinement by one level. We remark that using an odd partition factor M guarantees that the midpoint of the central child cell coincides with that of the expanded leaf

ζ_{h+1, Mi+⌊M/2⌋} = ζ_{h,i},    (18)

which does not need to be re-evaluated but simply relabeled, see Fig. 3. Note also that elimination of node ζ_{h,i} upon its refinement may trigger an update of h_min as a consequence of this relabeling. Refinement of one leaf ζ_{h,i} can thus be expressed as

E_{μ+1} = (E_μ − {ζ_{h,i}}) ∪ {ζ_{h+1,j} : j ∈ J_{h,i}}    (19)
5) Choosing leaves for refinement:
One key operation is the selection of the best node for refinement among the set of candidates in C_μ. This is simply the leaf corresponding to the largest value of the passivity metric

ζ_{h,ı*} = arg max{θ(ζ_{h,i}) : ζ_{h,i} ∈ C_μ}    (20)

selected among all leaves at the current level h. This choice is motivated by the main objective of finding local maxima.
6) Stopping conditions, classification, and restarts:
The proposed implementation checks the nodes generated upon refinement (19) to determine if they need to be further refined. This is achieved through various stopping conditions:
• condition S1: a target resolution has been reached. Considering the current level h being refined into nodes at level h + 1, this condition is expressed as

M^{−h−1} < δζ    (21)

where δζ is a control parameter.
• condition S2: the largest variation among adjacent refined nodes is within a prescribed tolerance. Denoting

Δ_{h,i} = max{|θ_{h+1,j+1} − θ_{h+1,j}|, j = Mi, ..., M(i + 1) − 2}    (22)

condition S2 can be expressed as

Δ_{h,i} < δθ    (23)

where δθ is a control parameter.
• condition S3: the largest variation among adjacent refined nodes is less than their distance from the passivity threshold

Δ_{h,i} < |θ̂_{h,i} − γ|,    (24)

where

θ̂_{h,i} = max{θ(ζ_{h+1,j}), j ∈ J_{h,i}}    (25)

This latter condition is only considered when a minimum resolution has been achieved

M^{−h−1} < δη    (26)

where δη is a control parameter.
After a refinement process, if all conditions S1, S2, S3 are false, all nodes ζ_{h+1,j} obtained from refinement are flagged as potentially critical and will be candidates for refinement at the next iteration, so that

C_{μ+1} = (C_μ − {ζ_{h,i}}) ∪ {ζ_{h+1,j}, j ∈ J_{h,i}}    (27)

At the same time, the current level is increased as

h ← h + 1    (28)

Otherwise, the new nodes ζ_{h+1,j} from refinement are flagged as non-critical and are inserted in the basket

B_{μ+1} = (B_μ − {ζ_{h,i}}) ∪ {ζ_{h+1,j}, j ∈ J_{h,i}}.    (29)

These nodes will not be further refined. The latter condition triggers a restart by resetting the current level to

h ← h_min    (30)

This restart enhances the exploration capabilities of the search.
7) Budget updates:
In principle, when the budget of function evaluations is reached upon a refinement step, the algorithm should stop. However, some critical regions to be explored or refined may still be present. Therefore, a budget update (increase) is triggered by the three conditions below:

• condition U1: all nodes from the last refinement are detected as passive,

θ̂_{h,i} < γ; (31)

• condition U2: at least one node from the last refinement is closer to the passivity threshold than a prescribed relative threshold ε,

(γ − θ̂_{h,i}) / θ̂_{h,i} < ε; (32)

• condition U3: the largest variation among adjacent refined nodes exceeds their smallest distance from the passivity threshold,

|γ − θ̂_{h,i}| < ∆_{h,i}. (33)

As for condition S3, this latter condition is only checked when a minimum resolution has been achieved according to (26).

When U1 holds and one among U2, U3 is verified, the budget update is achieved by setting

n_e ← n_e + δn_e (34)

where δn_e is a control parameter that can even be iteration-dependent. In order to preserve the effectiveness of condition U2 when the tree level h increases, any budget update (34) triggers a reduction of ε by a factor ϱ < 1, as ε ← ϱ ε. The original value of ε is restored upon restarts.

TABLE II: Control parameters of the proposed step-1 frequency warping and normalization algorithm. Three different settings are suggested, emphasizing speed (soft), accuracy (hard), and model qualification (final). (Columns: mode, ρ, R_cp, R_rp, R_hf, c, Q_max, κ_d.)

TABLE III: Control parameters of the proposed step-2 adaptive sampling (modified NMSO) algorithm. Three different settings are suggested, emphasizing speed (soft), accuracy (hard), and model qualification (final). (Columns: mode, M, δζ, δθ, δη, ε, ϱ, n_e.)
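The budget-update logic of (31)–(34) can be sketched in the same style; the combination rule (U1 together with either U2 or U3) and all names are reconstructions from the text above, not the reference code.

```python
def update_budget(theta_children, h, M, gamma, eps,
                  delta_eta, n_e, delta_n_e, rho):
    """Return the (possibly enlarged) budget n_e and the tightened eps."""
    theta_hat = max(theta_children)
    delta_hi = max(abs(theta_children[j + 1] - theta_children[j])
                   for j in range(len(theta_children) - 1))
    u1 = theta_hat < gamma                                # (31): all passive
    u2 = (gamma - theta_hat) / theta_hat < eps            # (32): near threshold
    u3 = (M ** (-h - 1) < delta_eta                       # (33), gated by (26)
          and abs(gamma - theta_hat) < delta_hi)
    if u1 and (u2 or u3):
        n_e += delta_n_e   # (34): enlarge the evaluation budget
        eps *= rho         # tighten the relative threshold, rho < 1
    return n_e, eps
```

Restoring the original ε upon restarts is left to the caller, as described in the text.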
8) Basket reuse:
The above-listed strategies prevent reusing and further refining the leaves in the basket B_µ, which is appropriate when execution speed is privileged over accuracy. Nevertheless, the procedure can be easily modified to include the elements of B_µ in the set of candidates for future expansions C_{µ+1}, as in the original NMSO scheme [34]. This basket reuse strategy should be used for more aggressive sampling, such as required for model qualification at the end of passivity enforcement loops (see below).

B. Implementation: choosing algorithm parameters
The proposed two-step adaptive sampling algorithm is controlled by two sets of parameters, for step 1 and step 2, collected in Tables II and III, respectively. Both tables report our suggestions for three possible execution modes. Settings labeled soft are adequate for passivity checks executed at the beginning of a passivity enforcement loop, where only the largest passivity violations are of interest and aggressive accuracy is not necessary. Conversely, settings labeled hard are adequate when results are expected to be very accurate and passivity violations are very small, such as at the last iteration(s) of passivity enforcement loops. Settings labeled final are adequate for final model qualification.

The differences in algorithm performance between the soft and final settings are illustrated through an example. Figure 4 compares the results obtained by both settings on a representative test case with P = 92 ports, a model having ¯n = 16 pole-residue terms, and a corresponding state-space matrix size N = 1472.

Fig. 4. Test case: θ(ζ) after adaptive frequency warping.

The proposed adaptive sampling check in soft mode resulted in L = 30 control points and K = 2255 evaluated frequency samples, a subset of which denoted passivity violations with ϕ(ω) > γ. The algorithm in hard mode requested K = 4245 samples with L = 55 control points, again including violation points, whereas final mode led to K = 9827 evaluated samples with L = 55. It should be noted that the detected violation points can lie at the edges of the subbands X_ℓ, in which case they should be dropped, since they do not correspond to actual local maxima over the entire frequency axis. After suitable postprocessing, the numbers of local maxima larger than the passivity threshold detected by the soft, hard, and final mode checks were consistent with the asymptotic outcome of the algorithm used in soft mode with n_e = ∞.
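The postprocessing described above, which discards subband-edge samples and keeps only interior local maxima exceeding the threshold, can be sketched as follows; `sigma_max` illustrates the underlying singular-value evaluation of one frequency sample, and both helpers are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def sigma_max(S):
    """Largest singular value of one (square) response matrix sample."""
    return np.linalg.svd(S, compute_uv=False)[0]

def interior_violation_maxima(phi, gamma):
    """Count interior local maxima of a sampled trajectory phi exceeding
    gamma. The first and last samples of a subband are dropped, since a
    band-edge sample need not be a local maximum over the whole axis."""
    count = 0
    for k in range(1, len(phi) - 1):
        if phi[k] > gamma and phi[k] >= phi[k - 1] and phi[k] >= phi[k + 1]:
            count += 1
    return count
```

For a scattering model, γ = 1 and phi would collect `sigma_max` of the sampled frequency responses within each subband.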
Figure 4 and its enlarged view in Fig. 5 compare the results of the soft and final modes, showing that the latter provides a more accurate detection of even very small violations over narrow frequency bands.

VI. NUMERICAL RESULTS
We now document the performance of the proposed algorithm on a large set of test cases. Section VI-A presents the results of a systematic testing campaign on a database of interconnect macromodels, while Sections VI-B and VI-C concentrate on two representative large-scale application examples. All documented numerical results were obtained on a workstation based on a Core i9-7900X CPU running at 3.3 GHz with 64 GB RAM.
A. Consistency and performance
The proposed passivity check algorithm was applied to a large set of benchmarks, including 447 models of different size and complexity (details in the bottom panel of Fig. 7). A total of 243 models form a database constructed by applying a standard VF scheme as available in the commercial software IdEM [41]. The remaining examples were obtained by applying a few passivity enforcement iterations to some models from the first set. Since the passivity metric changes completely through passivity enforcement iterations, these latter models can be considered as independent. Moreover, since passivity enforcement iteratively removes passivity violations, models that are processed by a few enforcement iterations are characterized by smaller violations, which are therefore potentially more difficult to detect.

Fig. 5. Zoom of Fig. 4 on a frequency band with small passivity violations.

A summary of the results is shown in Table IV, where the passivity characterization offered by the proposed scheme is compared to a state-of-the-art Hamiltonian-based check based on (5) or (6), see Sec. II-A3. The various models are classified as:

• True Positive (TP), if the proposed adaptive sampling and the Hamiltonian-based passivity check were in agreement, producing the same outcome (passive or not passive);
• False Positive (FP), if the adaptive sampling approach identified a passive model while the Hamiltonian check did not;
• False Negative (FN), if the Hamiltonian check classified a model as passive while the adaptive sampling did not;
• the last column, Passive but FN, indicates models that are
really non-passive even if the Hamiltonian check did not find any passivity violation.

TABLE IV: Summary of passivity check results on all benchmark cases. Models are classified as: True Positive (TP), False Positive (FP), False Negative (FN); see text.

We see that the proposed algorithm agreed with the Hamiltonian check in 99.9% of the cases. The proposed algorithm gave only one result that differed from the Hamiltonian check, namely a single False Negative. It turns out that this model was actually not passive (with a very tight passivity violation region and a largest singular value extremely close to the threshold), but the corresponding imaginary Hamiltonian eigenvalues were not detected. Figure 6 provides a graphical illustration.

Fig. 6. Passivity characterization of the unique False Negative case: θ(ζ) after frequency warping.

We conclude from this systematic testing campaign that the proposed scheme may be even more accurate than Hamiltonian algebraic tests, which appear to be more sensitive and prone to ill-conditioning in extreme cases.

The time performances of the two passivity check strategies are presented in the top panel of Fig. 7. In all cases a MATLAB-based single-thread prototype code was used, with a full Hamiltonian eigensolver. We see that the Hamiltonian check remains the method of choice for small-scale models (the number of ports and poles is reported in the bottom panel for comparison): for small-scale models, both the Hamiltonian and the proposed check produce their results in a fraction of a second. When it comes to large-scale models, the situation is drastically different. The proposed scheme has a runtime that, for all investigated cases, is below 10 seconds, which can be orders of magnitude faster than the Hamiltonian check.

B. A large-scale multiport shielding enclosure
We compare here the performance of the proposed passivity characterization scheme to various implementations of Hamiltonian-based tests available in the state-of-the-art commercial package
TABLE V: Shielding enclosure: execution time in minutes for different passivity characterization approaches, under both single- and multi-threaded execution. See text for details. (Columns: Hamiltonian [41]; proposed approach in soft, hard, and final modes.)
IdEM [41]. The structure selected for this comparison is a large-scale model of a shielding enclosure, depicted in Fig. 8. The enclosure is part of an energy-selective shield, obtained by loading the P = 20 × 20 = 400 ports located throughout the opening with nonlinear elements (back-to-back diode pairs). For an overview of energy-selective shielding see [22], and in particular [23], [24], [26], [27].

The unloaded structure was initially characterized through a Method-of-Moments (MoM) frequency-domain code [42], obtaining a set of P × P scattering responses sampled over a frequency band in the MHz range. These samples were subjected to VF to obtain a rational model (¯n = 26 poles), leading to a state-space realization of size N = 10400. A few selected port responses of the model are compared to field solver data in Fig. 9, demonstrating good model accuracy.

The model was processed by four different passivity characterization schemes, namely:

• Full Hamiltonian check, as implemented in [41], extracting all imaginary eigenvalues of the full Hamiltonian matrix (5);
• Sparse Hamiltonian check, as implemented in [41], based on the Hamiltonian eigensolver exploiting the iterative multishift Krylov subspace iterations of [17];
• Adaptive Hamiltonian check, as implemented in [41], based on the hybrid sampling-sparse Hamiltonian eigensolver documented in [18];
• Proposed two-stage adaptive sampling check.

The Hamiltonian eigensolvers were available in two execution modes, using T = 1 and T = 8 parallel threads, respectively. The proposed approach was implemented as serial code, executed in a MATLAB [43] environment with multithreading both enabled and disabled (at low level, for SVD computations). The results are collected in Table V, which confirms a major speedup of the proposed approach with respect to Hamiltonian characterizations. All methods provided consistent passivity characterization results in terms of the number of violation bands. Figure 10 depicts the singular value trajectories computed by the proposed adaptive sampling scheme, highlighting the local maxima exceeding the passivity threshold.

C. A via array
We analyze here a via array in an 8-layer Printed Circuit Board (PCB). The array consists of vias with a 4:1 signal-to-ground ratio, resulting in a total of P = 640 electrical ports (1–320 on the top layer, 321–640 on the bottom layer). A fast frequency-domain solver [44] was applied to extract the P × P scattering responses up to the GHz range, defining circular ports between via pad and antipad at the top and bottom layers. These raw samples were in turn used to generate a large-scale macromodel through VF using the IdEM software [41], resulting in ¯n = 40 pole-residue terms and a total number of states N = 25600.

Fig. 7. Time performances and model complexity for all 447 investigated test cases.

Fig. 8. A perfectly conducting box enclosure designed for energy-selective shielding. Red dots depict lumped ports distributed on a regular grid throughout the aperture.

Fig. 9. Model vs. data comparison of selected responses of the shielding enclosure.

Fig. 10. Singular value plot of the shielding enclosure model. Bottom panel: enlarged view.

TABLE VI: Via array: execution time in minutes for different passivity characterization approaches, under both single- and multi-threaded execution. See text for details. (Columns: Hamiltonian [41]; proposed approach in soft, hard, and final modes.)
The model was processed by the proposed passivity characterization algorithm, as well as by all Hamiltonian checks available in IdEM. Timing results are reported in Table VI, whereas Fig. 11 provides a comparison between selected model responses and the corresponding field solver samples. Figure 12 depicts the largest singular value samples returned by the proposed adaptive sampling scheme, highlighting all local maxima exceeding the passivity threshold. Execution in both hard and final modes identified all 11 local maxima, whereas the fast soft mode found 10 maxima. Also for this example, the execution time of the proposed passivity check algorithm is drastically reduced with respect to the Hamiltonian checks, at least for the soft and hard execution modes, with no loss of accuracy in the characterization of passivity violations. Full Hamiltonian checks could not be performed due to excessive storage requirements.
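Checking that a fast mode recovers the maxima found by a more accurate one, as in the soft-versus-final comparison above, reduces to a tolerance-based set comparison of the detected violation frequencies; the matching rule below is an assumption for illustration, not part of the proposed algorithm.

```python
def missed_maxima(reference, detected, rtol=1e-6):
    """Return the reference maxima (frequencies) that have no match in
    `detected` within a relative tolerance rtol."""
    missed = []
    for f in reference:
        if not any(abs(f - g) <= rtol * max(abs(f), abs(g))
                   for g in detected):
            missed.append(f)
    return missed
```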
D. Discussion
The above numerical results show that the proposed algorithm appears to be the only viable and scalable solution for passivity characterization of large-scale models. The following considerations apply.

• The number of subbands produced by the step-1 adaptive frequency warping grows with the number ¯n of model poles and can be very large. This is not a limitation, since the independent processing of those subbands lends itself to a naive data-based parallelization, which is thus expected to provide even further speedup with respect to the already highly efficient implementation documented here. Code parallelization is left to future investigations.
• Multiple passivity violation maxima in the same subband may be missed if the stopping thresholds are excessively loose or the allocated budget of evaluations is too small.
Fig. 12. Singular value plot of the via array model. Bottom panel: enlarged view showing that local maxima are correctly identified by the proposed algorithm in hard (not shown) and final modes.

• In case a very accurate test is needed, asymptotic convergence to all local maxima can be guaranteed by releasing the budget constraints and also enabling basket reuse for refinement, building on the provable asymptotic consistency of the NMSO algorithm class [34]. We remark that this strategy was never required in our extensive tests, since the proposed implementation never identified non-passive models as passive.

VII. CONCLUSION
This work introduced a two-stage hierarchical sampling-based passivity characterization algorithm for large-scale lumped macromodels. The proposed scheme was tested on about 450 test cases of increasing complexity, reaching P = 640 ports and tens of thousands of states. The results show that the proposed approach outperforms Hamiltonian tests for medium- and large-scale systems, with runtime speedups of orders of magnitude in some cases.

Thanks to the multi-stage hierarchical implementation, our proposed algorithm was almost always in agreement with Hamiltonian checks. The only exception was a single case that was (correctly) detected as non-passive by the proposed method, although the Hamiltonian check could not identify any passivity violation. Therefore, we conclude that the proposed scheme is at least as reliable as state-of-the-art approaches, with much more favorable scalability with model size.

Significant research work is still needed for efficient extraction and handling of large-scale macromodels. Fully coupled models with many ports and large dynamic orders are in fact not effective when synthesized as equivalent circuits and used in system-level simulations, due to the very large number of model parameters. Ad hoc model representations based on sparsification or compression appear to provide a better solution for speeding up numerical simulations, possibly combined with specialized solvers that are aware of the model structure. We are actively investigating in this direction, and the results will be documented in a forthcoming report.

ACKNOWLEDGEMENT
The authors are grateful to Morten Schierholz, Institut für Theoretische Elektrotechnik, Hamburg University of Technology, for providing the via array dataset of Section VI-C. This work was supported in part by the German Research Foundation (DFG).

REFERENCES

[1] B. Gustavsen and A. Semlyen, "Rational approximation of frequency domain responses by vector fitting," IEEE Trans. Power Del., vol. 14, no. 3, pp. 1052–1061, Jul. 1999.
[2] S. Grivet-Talocia and B. Gustavsen, Passive Macromodeling: Theory and Applications. New York: John Wiley and Sons, 2016.
[3] P. Triverio, S. Grivet-Talocia, M. S. Nakhla, F. Canavero, and R. Achar, "Stability, causality, and passivity in electrical interconnect models," IEEE Trans. Adv. Packag., vol. 30, no. 4, pp. 795–808, Nov. 2007.
[4] J. C. Willems, "Dissipative dynamical systems part I: General theory," Archive for Rational Mechanics and Analysis, vol. 45, no. 5, pp. 321–351, 1972. [Online]. Available: http://dx.doi.org/10.1007/BF00276493
[5] M. R. Wohlers, Lumped and Distributed Passive Networks. Academic Press, 1969.
[6] B. D. O. Anderson and S. Vongpanitlerd, Network Analysis and Synthesis. Prentice-Hall, 1973.
[7] C. P. Coelho, J. Phillips, and L. M. Silveira, "A convex programming approach for generating guaranteed passive approximations to tabulated frequency-data," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 23, no. 2, pp. 293–301, Feb. 2004.
[8] D. Saraswat, R. Achar, and M. S. Nakhla, "Fast passivity verification and enforcement via reciprocal systems for interconnects with large order macromodels," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 15, no. 1, pp. 48–59, Jan. 2007.
[9] ——, "Global passivity enforcement algorithm for macromodels of interconnect subnetworks characterized by tabulated data," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 7, pp. 819–832, Jul. 2005.
[10] C. S. Saunders, J. Hu, C. E. Christoffersen, and M. B. Steer, "Inverse singular value method for enforcing passivity in reduced-order models of distributed structures for transient and steady-state simulation," IEEE Trans. Microw. Theory Techn., vol. 59, no. 4, pp. 837–847, Apr. 2011.
[11] T. Brüll and C. Schröder, "Dissipativity enforcement via perturbation of para-Hermitian pencils," IEEE Trans. Circuits Syst. I, vol. 60, no. 1, pp. 164–177, Jan. 2013.
[12] S. Gao, Y.-S. Li, and M.-S. Zhang, "An efficient algebraic method for the passivity enforcement of macromodels," IEEE Trans. Microw. Theory Techn., vol. 58, no. 7, pp. 1830–1839, Jul. 2010.
[13] T. D'haene and R. Pintelon, "Passivity enforcement of transfer functions," IEEE Trans. Instrum. Meas., vol. 57, no. 10, pp. 2181–2187, Oct. 2008.
[14] B. Porkar, M. Vakilian, R. Iravani, and S. M. Shahrtash, "Passivity enforcement using an infeasible-interior-point primal-dual method," IEEE Trans. Power Syst., vol. 23, no. 3, pp. 966–974, Aug. 2008.
[15] T. Wang and Z. Ye, "Robust passive macro-model generation with local compensation," IEEE Trans. Microw. Theory Techn., vol. 60, no. 8, pp. 2313–2328, Aug. 2012.
[16] S. Grivet-Talocia, "Passivity enforcement via perturbation of Hamiltonian matrices," IEEE Trans. Circuits Syst. I, vol. 51, no. 9, pp. 1755–1769, Sep. 2004.
[17] S. Grivet-Talocia and A. Ubolli, "On the generation of large passive macromodels for complex interconnect structures," IEEE Trans. Adv. Packag., vol. 29, no. 1, pp. 39–54, Feb. 2006.
[18] S. Grivet-Talocia, "An adaptive sampling technique for passivity characterization and enforcement of large interconnect macromodels," IEEE Trans. Adv. Packag., vol. 30, no. 2, pp. 226–237, May 2007.
[19] S. Grivet-Talocia and A. Ubolli, "A comparative study of passivity enforcement schemes for linear lumped macromodels," IEEE Trans. Adv. Packag., vol. 31, no. 4, pp. 673–683, Nov. 2008.
[20] S. Grivet-Talocia, "On driving non-passive macromodels to instability," Int. J. Circuit Theory Appl., vol. 37, no. 8, pp. 863–886, Oct. 2009.
[21] "Heterogeneous Integration Roadmap, 2019 Edition." [Online]. Available: https://eps.ieee.org/technology/heterogeneous-integration-roadmap/2019-edition.html
[22] G. V. Eleftheriades, "Protecting the weak from the strong," Nature, vol. 505, no. 7484, pp. 490–491, 2014. [Online]. Available: https://doi.org/10.1038/nature12852
[23] T. Wendt, C. Yang, H. D. Brüns, S. Grivet-Talocia, and C. Schuster, "A macromodeling-based hybrid method for the computation of transient electromagnetic fields scattered by nonlinearly loaded metal structures," IEEE Trans. Electromagn. Compat., vol. 62, no. 4, pp. 1098–1110, Aug. 2020.
[24] C. Yang, P. Liu, and X. Huang, "A novel method of energy selective surface for adaptive HPM/EMP protection," IEEE Antennas Wireless Propag. Lett., vol. 12, pp. 112–115, 2013.
[25] T. Wendt, C. Yang, S. Grivet-Talocia, and C. Schuster, "Numerical complexity study of solving hybrid multiport field-circuit problems for diode grids," in Int. Conf. on Electromagnetics in Advanced Applications (ICEAA 2019), Sep. 2019.
[26] C. Yang, H.-D. Brüns, P. Liu, and C. Schuster, "Impulse response optimization of band-limited frequency data for hybrid field-circuit simulation of large-scale energy-selective diode grids," IEEE Trans. Electromagn. Compat., vol. PP, no. 99, pp. 1–9, May 2016.
[27] C. Yang, P. Liu, H.-D. Brüns, and C. Schuster, "Design aspects for HIRF protection of a rectangular metallic cavity using energy selective diode grids," vol. 1, IEEE, 2016, pp. 35–37.
[28] D. Deschrijver, M. Mrozowski, T. Dhaene, and D. De Zutter, "Macromodeling of multiport systems using a fast implementation of the vector fitting method," IEEE Microw. Wireless Compon. Lett., vol. 18, no. 6, pp. 383–385, Jun. 2008.
[29] S. B. Olivadese and S. Grivet-Talocia, "Compressed passive macromodeling," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 2, no. 8, pp. 1378–1388, Aug. 2012.
[30] A. Chinea and S. Grivet-Talocia, "On the parallelization of vector fitting algorithms," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 1, no. 11, pp. 1761–1773, Nov. 2011.
[31] S. Ganeshan, N. K. Elumalai, R. Achar, and W. K. Lee, "GVF: GPU-based vector fitting for modeling of multiport tabulated data networks," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 10, no. 8, pp. 1375–1387, 2020.
[32] A. Chinea, S. Grivet-Talocia, S. B. Olivadese, and L. Gobbato, "High-performance passive macromodeling algorithms for parallel computing platforms," IEEE Trans. Compon. Packag. Manuf. Technol., vol. 3, no. 7, pp. 1188–1203, Jul. 2013.
[33] S. Boyd, V. Balakrishnan, and P. Kabamba, "A bisection method for computing the H∞ norm of a transfer matrix and related problems," Mathematics of Control, Signals and Systems, vol. 2, no. 3, pp. 207–219, 1989.
[34] A. Al-Dujaili and S. Suresh, "A naive multi-scale search algorithm for global optimization problems," Information Sciences, vol. 372, pp. 294–312, Dec. 2016.
[35] B. Gustavsen, "Computer code for rational approximation of frequency dependent admittance matrices," IEEE Trans. Power Del., vol. 17, no. 4, pp. 1093–1098, Oct. 2002.
[36] E. G. Gilbert, "Controllability and observability in multivariable control systems," Journal of the Society for Industrial and Applied Mathematics, Series A: Control, vol. 1, no. 2, pp. 128–151, 1963.
[37] C. Scherer and S. Weiland, "Linear matrix inequalities in control," Lecture Notes, Dutch Institute for Systems and Control, Delft, The Netherlands, 2000.
[38] L. Vandenberghe, V. Balakrishnan, R. Wallin, A. Hansson, and T. Roh, Interior-Point Algorithms for Semidefinite Programming Problems Derived from the KYP Lemma. Springer, Berlin, Heidelberg, 2005, vol. 312, pp. 579–579.
[39] H. Chen and J. Fang, "Enforcing bounded realness of S parameter through trace parameterization," in Electrical Performance of Electronic Packaging, 2003, Oct. 2003, pp. 291–294.
[40] Z. Ye, L. M. Silveira, and J. R. Phillips, "Extended Hamiltonian pencil for passivity assessment and enforcement for S-parameter systems," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2010.
[43] MATLAB User's Guide, The Mathworks, Inc.
[44] S. Müller, F. Happ, X. Duan, R. Rimolo-Donadio, H. Brüns, and C. Schuster, "Complete modeling of large via constellations in multilayer printed circuit boards,"