Probabilistic Analysis of Loss in Interface Adapter Chaining
arXiv preprint [cs.SE]

Yoo Chung and Dongman Lee

October 8, 2018
Abstract
Interface adapters allow applications written for one interface to be reused with another interface without having to rewrite application code, and chaining interface adapters can significantly reduce the development effort required to create the adapters. However, interface adapters will often be unable to convert interfaces perfectly, so there must be a way to analyze the loss from interface adapter chains in order to improve the quality of interface adaptation. This paper describes a probabilistic approach to analyzing loss in interface adapter chains, which not only models whether a method can be adapted but also how well methods can be adapted. We also show that probabilistic optimal adapter chaining is an NP-complete problem, so we describe a greedy algorithm which can construct an optimal interface adapter chain with exponential time in the worst case.
1 Introduction

Network services are being developed all the time, along with the interfaces that specify how these services should be accessed. Only a very small number of the interfaces to these services are standardized, and many interfaces can be developed for different services which have very similar functionality. In order to access a different interface than a client was written for without rewriting the client, interface adapters could be used to convert invocations in one interface to another, and these adapters can also be chained to reduce the number of interface adapters that must be created [6, 16, 7, 14, 11].

However, it is unlikely that interface adaptation can be done perfectly, since interfaces are usually developed independently of each other with no regard for compatibility. Adaptation loss will usually result as certain methods cannot be adapted by the interface adapter, and the problem is only worse when adapters are chained. Even analyzing how much loss results from an interface adapter chain is not a trivial problem that can be modeled as a shortest path problem.

Our previous work [11, 2] took the approach of assuming that a method in a target interface could be implemented as long as all the prerequisite methods in the source interface were available. However, a discrete approach such as this ignores the possibility of partial adaptation of methods, where an adapted method may not be able to be invoked with all possible arguments because of limitations with methods in a source interface. For a trivial example, negative numbers for a square root function cannot be handled if either the source interface or target interface are unaware of imaginary numbers.

We describe a probabilistic approach to handling the partial adaptation of methods, where the loss may occur not just due to missing functionality or methods, but also due to an interface adapter being unable to handle all arguments given for a method in a target interface.
We first investigate how probabilities should be expressed, where independence assumptions are made so that we can obtain a computational model that can be feasibly used in a real system. Based on this probabilistic model, we define how to express probabilistic loss in interface adaptation and how to model interface adapters, which can then be used to probabilistically analyze loss in interface adapter chains. As in the discrete approach [2], probabilistic optimal adapter chaining is NP-complete, so we describe a greedy algorithm which can construct an optimal adapter chain with exponential run-time in the worst case.

This paper is structured as follows. In section 3, we describe elements from the discrete approach which we use in developing the probabilistic approach. In section 4, we formulate the probabilistic approach for analyzing loss in interface adapter chains. Section 5 shows that probabilistic optimal adapter chaining is NP-complete, and section 6 describes an algorithm which can construct an optimal adapter chain with exponential run-time in the worst case. We discuss related work in section 2, and section 7 concludes.
2 Related Work

Ponnekanti and Fox [15] suggest using interface adapter chaining for network services to handle the different interfaces available for similar types of services. They provide a way to query all services whose interfaces can be adapted to a known interface. They also support lossy adapters, but the support is limited to detecting whether a particular method and specific parameters can be handled at runtime. They do not provide a way to analyze the loss of an interface adapter chain, so they are unable to choose a chain with less loss when alternatives are available.

Gschwind [7] allows components to be accessed through a foreign interface and implements an interface adaptation system for Enterprise JavaBeans [13]. It implements a centralized adapter repository that stores adapters, along with weights that mark the priority of an adapter. Dijkstra's algorithm [5] is used to construct the shortest interface adapter chain that adapts a source interface into a target interface. While there is support for marking an adapter as lossy or not, it does not have the capability to properly analyze and compare the loss in interface adapter chains.

Vayssière [16] supports the interface adaptation of proxy objects for Jini [1]. The goal is to enable clients to use services even when they have different interfaces than expected. It provides an adapter service which hooks into the lookup service, so that a client can use a proxy object without having to be aware that any adaptation occurs. No consideration is spent on the possibility that interface adapters may not be perfect.

There is also other work using chained interface adapters which focuses on maintaining backward compatibility as interfaces evolve [10, 8, 9]. Since these are applied to different versions of the same interface, they do not consider the possibility of adaptation loss, in contrast to other work where the focus is on adaptation between different interfaces with potentially irreconcilable incompatibilities.
3 Discrete Approach

In this section, we describe the bare essentials from a discrete approach of analyzing lossy interface adapter chaining [2], which are necessary for the probabilistic approach developed in section 4. In this section as well as in the rest of the paper, a range convention for the index notation used to express matrixes and vectors will also be in effect [4].

We take the view that an interface is a specification of a collection of methods (which can also be called operations, member functions, etc.) which specify the concrete syntax and types for invoking actions on a service (which can also be called an object, module, component, etc.) that conforms to the interface.

An interface adapter transforms calls for one interface into calls for another. For example, if one interface has a method setAudioProperties while another interface has methods setVolume and setBalance, an interface adapter could handle a call to setAudioProperties in the former interface using calls to setVolume and setBalance when the actual service conforms to the latter interface.

Adapting interfaces using a chain of interface adapters means converting calls to an interface to another using one interface adapter, then converting them again to yet another interface with a subsequent interface adapter, and so on until we can convert calls for a desired source interface to calls for a desired target interface.

A method dependency matrix is used to express the methods in a source interface necessary for providing methods in a target interface:
Definition 1. A method dependency matrix $a_{ji}$ is a boolean matrix where:

• $a_{11}$ is true, while $a_{1i}$ is set to false for all $i \neq 1$.
• If method $j$ can always be implemented in the target interface, set $a_{ji}$ to false for all $i$.
• If method $j$ can never be implemented given the source interface, set $a_{j1}$ to true, while $a_{ji}$ is set to false for all $i \neq 1$.
• If method $j$ depends on the availability of actual methods in the source interface, then $a_{j1}$ is false, while $a_{ji}$ is true if and only if method $j$ in the target interface can be implemented only if method $i$ in the source interface is available.

Here, index 1 corresponds to the dummy method of an interface. Method dependency matrixes can be composed, which in effect models two interface adapters chained together as a single equivalent adapter in terms of loss:
Definition 2. Given method dependency matrixes $b_{kj}$ and $a_{ji}$, the composition operator $\otimes$ of two method dependency matrixes is defined as:

$$b_{kj} \otimes a_{ji} = \bigvee_j (b_{kj} \wedge a_{ji}) \qquad (1)$$

Theorem 1.
The composition operator for method dependency matrixes is associative:

$$c_{lk} \otimes (b_{kj} \otimes a_{ji}) = (c_{lk} \otimes b_{kj}) \otimes a_{ji}$$

We can also define an interface adapter graph, which is a directed graph where interfaces are nodes and adapters are edges. If there are interfaces $I_1$ and $I_2$ with an adapter $A$ that adapts source interface $I_1$ to target interface $I_2$, then $I_1$ and $I_2$ would be nodes in the interface adapter graph, while $A$ would be a directed edge from $I_1$ to $I_2$.

Definition 3. An interface adapter graph is a directed graph where interfaces are nodes and adapters are edges. The source node for an edge corresponds to the source interface, while the target node for an edge corresponds to the target interface.

When an interface adapter translates a call to a method in the target interface to calls to a method in the source interface, it is possible that the translation cannot be done perfectly. If the source interface lacks certain capabilities, then the adapter may not be able to properly process specific parameters received by a method.

For example, there may be multiple video playback interfaces with adapters between them as in figure 1, where each interface is only able to handle a specific set of video formats. For instance, if client code is written to access interface Video2 but the actual service has interface Video1, then parameters to the playback method for Video2 with formats MOV and RMV cannot be handled properly. In another situation where client code is written for Video3 but the actual service conforms to Video1, we would need an interface adapter chain from Video1 to Video3, and we would like to know if the chain that goes through Video2 or the one that goes through Video4 is better. (We ignore the possibility that video conversion could be done by the adapter itself.)

[Figure 1: Adapting playback methods in video playback interfaces which support different video formats. Each of Video1 through Video4 supports a different subset of the formats AVI, OGM, MKV, MPG, ASF, MOV, RMV, and MP4.]

With the discrete approach, which only looks at whether methods are available or not, we must make a choice about what to do with methods that can be only partially adapted. Conservatively treating such methods as being unavailable excludes the use of interface adapter chains that can do an imperfect but mostly complete job of adapting such methods. On the other hand, optimistically treating such methods as being available could result in the selection of an interface adapter chain that is much worse than other chains in terms of how complete the adaptation is. In figure 1, Video3 could not be adapted from Video1 at all with the conservative treatment, while with the optimistic treatment we would not be able to determine that the chain which goes through Video4, which can support AVI, OGM, MKV, MPG, and ASF, is superior to the one that goes through Video2, which can only support OGM, MKV, and MPG.

The probabilistic approach we introduce here takes into account that methods can be partially adapted, relaxing the binary limitation of only treating a method as available or not.
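The composition operator of Definition 2 amounts to boolean matrix multiplication with OR as addition and AND as multiplication. A minimal sketch in Python (the list-of-lists representation and function name are ours, not from the paper):

```python
def compose_dependency(b, a):
    """Compose method dependency matrixes per Definition 2:
    (b ⊗ a)[k][i] is the OR over j of (b[k][j] AND a[j][i]).
    b maps middle-interface methods to target methods, and a maps
    source methods to middle-interface methods."""
    mid = len(a)       # number of methods in the middle interface
    src = len(a[0])    # number of methods in the source interface
    return [[any(b[k][j] and a[j][i] for j in range(mid))
             for i in range(src)]
            for k in range(len(b))]

# The target method needs middle method 0; middle method 0 needs source
# method 1; so the composed adapter needs source method 1 only.
b = [[True, False]]
a = [[False, True],
     [False, False]]
composed = compose_dependency(b, a)   # [[False, True]]
```

The composed matrix can itself be composed again, which is what makes the single-equivalent-adapter view of a whole chain possible.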
4 Probabilistic Approach

We develop a probabilistic approach by starting off with the most general form of expressing the probabilities and adding assumptions until we have a probabilistic formula that is practical. Without additional assumptions, the probabilities can only be expressed in a way that is useless for analyzing real systems. The additional assumptions allow us to express the desired probabilities in a way that they can be feasibly computed from a set of values that can be measured in practice.

We first describe the notation for expressing certain probabilistic events in table 1. These events denote whether a method can handle a given argument, or whether an interface adapter can convert an argument for a method in a target interface to an argument for a method in the source interface and successfully convert back the result. We assume that a method only accepts a single argument: this is not a problem since methods with multiple arguments can simply be modeled as a method accepting a single tuple with multiple components. If a method does not need an argument, we treat it as receiving a dummy argument anyway.

$V_{m,I}(a)$ — Method $m$ of interface $I$ can properly handle argument $a$.
$V_{m,I}$ — Method $m$ of interface $I$ can properly handle its argument.
$C^A_{m \to m'}$ — Interface adapter $A$ can successfully convert an argument for method $m$ in the target interface to an argument for method $m'$ in the source interface and convert back the result.

Table 1: Probabilistic events.

Let us say that we wish to adapt methods in source interface $I_S$ into method $j$ in target interface $I_T$.
The most general form for expressing the probability that a method could handle an argument is to sum the probabilities for every possible argument, where we must consider the probability of the method receiving a specific argument and then the probability that the method can handle it:

$$P(V_{j,I_T}) = \sum_a P(V_{j,I_T}(a)) \, P(A = a) \qquad (2)$$

The most general form for expressing the probability requires that we know the probability distribution of arguments, which is not feasible except for the simplest of argument domains. For example, the probability distribution for a simple integer argument may require $2^{32}$ or $2^{64}$ probabilities to be expressed for the typical computer architecture, and even measuring such a probability distribution may not be feasible in the first place. It is also not feasible that we already know the probabilities for how a method can handle each and every possible argument.

For this reason, we make the assumption that the probabilities do not depend on the specific arguments. Given this assumption, we can now express $P(V_{j,I_T})$ in terms of whether an argument can be converted and whether it can be handled. More specifically, this means that for all methods in the source interface that the interface adapter $A$ requires to implement a method in the target interface, the adapter must convert the argument and the method in the source interface must handle the converted argument. Using the method dependency matrix $a_{ji}$ for adapter $A$, $P(V_{j,I_T})$ can be expressed as:

$$P(V_{j,I_T}) = P\Big(\bigcap_{a_{ji}} \big(V_{i,I_S} \cap C^A_{j \to i}\big)\Big) \qquad (3)$$

This is still too unwieldy an expression to be practical, since it is unclear how dependencies in the events for different methods in the source interface affect the overall probability.

[Footnote: There is a more precise approach using abstract interpretation that does not rely on such assumptions, but it is much more difficult to set up and requires exponential space complexity [3].]
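For a toy argument domain, equation (2) can be evaluated by brute-force enumeration, which also makes the scaling problem obvious: the dictionaries below would need one entry per possible argument. All names and numbers here are illustrative, not from the paper:

```python
def handle_probability(arg_dist, handles):
    """Equation (2) by enumeration: sum P(V(a)) * P(A = a) over every
    possible argument a.  arg_dist maps each argument to P(A = a);
    handles maps each argument to P(V(a))."""
    return sum(handles[a] * p for a, p in arg_dist.items())

# Four possible arguments, uniformly likely; the method handles two of
# them always, one half the time, and one never.
arg_dist = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
handles = {0: 1.0, 1: 1.0, 2: 0.5, 3: 0.0}
pv = handle_probability(arg_dist, handles)   # 0.625
```

For a 32-bit integer argument the dictionaries would each need on the order of $2^{32}$ entries, which is exactly why the assumptions below are introduced.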
It would also be unclear how to measure the probabilities beforehand without trying out every possible argument and configuration of interface adapter chains, something that is clearly not feasible. Therefore we make an additional assumption that the events for separate methods in the source interface are independent.

With the additional assumption, $P(V_{j,I_T})$ can be expressed as:

$$P(V_{j,I_T}) = \prod_{a_{ji}} P(V_{i,I_S} \cap C^A_{j \to i}) \qquad (4)$$

However, equation (4) is still not appropriate for practical use. The reason is that it entangles the work done by the interface adapter and whether the method in the source interface can handle the converted argument. Basically, the probabilities intrinsic to the interface adapter and the source interface are entangled. If the source interface itself is the result of adaptation through an interface adapter chain, then we have the problem of a configuration-dependent event being entangled with a configuration-independent event, and there is no simple way to derive the required probabilities.

Thus we make one final additional assumption that the probability an interface adapter can successfully convert arguments and results is independent from the probability that a method in the source interface can handle an argument. This allows us to express $P(V_{j,I_T})$ as:

$$P(V_{j,I_T}) = \prod_{a_{ji}} P(V_{i,I_S}) \, P(C^A_{j \to i}) \qquad (5)$$

Equation (5) is finally in a form that can be used practically. The probability that an interface adapter $A$ can successfully convert an argument for method $j$ in the target interface to an argument for method $i$ in the source interface, $P(C^A_{j \to i})$, is a value that is intrinsic to an interface adapter. In principle, it could be measured empirically by exhaustively testing the interface adapter to see which arguments it can accept, although in practice more sophisticated testing based on random samples would be used. It might even be possible to obtain the probabilities through analysis of the interface adapter code.
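The testing based on random samples mentioned above can be sketched as a simple Monte Carlo estimate; `adapter_converts` and the argument pool are hypothetical stand-ins for a real adapter test harness, not anything defined in the paper:

```python
import random

def estimate_conversion_probability(adapter_converts, argument_pool,
                                    trials=10000):
    """Monte Carlo estimate of a conversion probability: the fraction of
    randomly sampled arguments the adapter converts successfully."""
    successes = sum(
        1 for _ in range(trials)
        if adapter_converts(random.choice(argument_pool))
    )
    return successes / trials

# Toy stand-in: an "adapter" that can only convert 4 of the 6 formats
# a target method accepts, so the estimate should come out near 4/6.
formats = ["MOV", "OGM", "MKV", "MPG", "RMV", "MP4"]
convertible = {"OGM", "MKV", "MPG", "MP4"}
estimate = estimate_conversion_probability(lambda f: f in convertible,
                                           formats)   # ≈ 0.667
```

With 10000 trials the sampling error is well under one percentage point, which is more than enough precision for comparing alternative chains.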
The probability that method $i$ in source interface $I_S$ can handle an argument, $P(V_{i,I_S})$, is also a value that can be obtained, either through analytical or empirical means similar to measuring probabilities from interface adapters if $I_S$ is an interface to an actual service, or through a recursive application of equation (5) when $I_S$ is an adapted interface.

We now have the basis for describing a framework similar to the one developed for the discrete chain approach. We define a method availability vector and a method dependency matrix, but in addition we also define a conversion probability matrix.

As before, the method availability vector $p_i$ expresses how well a method is supported in an interface; it is not intrinsic to an interface but rather represents the loss from interface adaptation. The components for a method availability vector in the probabilistic approach are probabilities. $p_i$ is defined as the probability that method $i$ can handle an argument it receives, i.e. $p_i = P(V_{i,I})$.

The method dependency matrix is the same as defined in section 3 and is used in equation (5). Unlike for the discrete chain approach, however, the method dependency matrix does not suffice to describe the relevant information for an interface adapter. We also require a set of probabilities $P(C^A_{j \to i})$ for how well an interface adapter converts an argument for a method in the target interface to that for the relevant method in the source interface. The conversion probability matrix $t_{ji}$ is defined in terms of these probabilities, where $t_{ji} = P(C^A_{j \to i})$.

Given method availability vector $p_i$, method dependency matrix $a_{ji}$, and conversion probability matrix $t_{ji}$, we can now define the adaptation operator $\otimes$. Instead of just the method dependency matrix being applied to the method availability vector, the conversion probability matrix must also be applied in conjunction with the method dependency matrix:

Definition 4.
Given method dependency matrix $a_{ji}$, conversion probability matrix $t_{ji}$, and method availability vector $p_i$, the probabilistic adaptation operator $\otimes$ is defined as:

$$(a_{ji}, t_{ji}) \otimes p_i = \prod_{a_{ji}} t_{ji} p_i \qquad (6)$$

Definition 5.
A tuple $(a_{ji}, t_{ji})$ of a method dependency matrix and a conversion probability matrix is called a probabilistic adaptation factor. The probabilistic adaptation factor for an interface adapter $A$ is denoted as $\mathrm{depend}(A)$.

It should be emphasized that equation (6) is only rigorously correct given the following three assumptions. However, the three assumptions make it possible to feasibly compute $P(V_{i,I})$ from values that can be feasibly measured or estimated a priori in a rigorously sound manner, instead of having to define an ad hoc computational framework where definitions are vague in their operational meaning. While it is not hard to see that the assumptions would not hold for most real systems, it is an open question how closely the probabilistic approach based on these assumptions approximates actual losses due to interface adaptation.

• The probabilities do not depend on the specific arguments.
• The events for separate methods in the source interface are independent.
• The probability that an interface adapter can successfully convert arguments and results is independent from the probability that a method in the source interface can handle an argument.

It should be noted that equation (6) is incomplete in that it is ambiguous what the result should be when no $a_{ji}$ is true. If this is the case, it could be that the method in the target interface can always be implemented regardless of availability of methods in the source interface, or it could be that the method cannot be implemented no matter what.

The workaround is simple: a dummy method is defined for each interface, where the method dependency matrixes follow the same rules. For the conversion probability matrix, setting $t_{j1}$ to zero for all $j$ would yield the expected results, given the usual convention that an empty product has a value of one [12].
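Equation (6) can be sketched directly; the list-of-lists representation and the function name are ours, and the empty-product convention from the text falls out of `math.prod` naturally:

```python
import math

def adapt(a, t, p):
    """Adaptation operator of equation (6): for each target method j,
    multiply t[j][i] * p[i] over every source method i with a[j][i]
    true.  math.prod of an empty sequence is 1, matching the
    empty-product convention in the text."""
    return [
        math.prod(t[j][i] * p[i] for i in range(len(p)) if a[j][i])
        for j in range(len(a))
    ]

# One target method depending on both source methods:
# availability = (0.5 * 0.9) * (1.0 * 0.8) = 0.36.
a = [[True, True]]
t = [[0.5, 1.0]]
p = [0.9, 0.8]
avail = adapt(a, t, p)   # ≈ [0.36]
```

A target method with no dependencies at all gets the empty product 1, which is why the dummy-method rows are needed to distinguish "always implementable" from "never implementable".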
We will denote a method availability vector for interface $I$ in which all methods are available and can handle all arguments by $1'_I$, where all components have value one except for the component corresponding to the dummy method, which has value zero.

We would like to be able to derive a composite probabilistic adaptation factor from the composition of two probabilistic adaptation factors, which would be equivalent to describing the chaining of two interface adapters as if they were a single interface adapter.

Given interfaces $I_1$, $I_2$, and $I_3$, let the corresponding method availability vectors be $p_i$, $q_j$, and $r_k$. In addition, let there be interface adapters $A_1$ and $A_2$, where $A_1$ converts $I_1$ to $I_2$ and $A_2$ converts $I_2$ to $I_3$, with corresponding probabilistic adaptation factors $(a_{ji}, t_{ji})$ and $(b_{kj}, u_{kj})$, respectively. We would like to know how to derive the probabilistic adaptation factor $(c_{ki}, v_{ki})$ that would correspond to an interface adapter equivalent to $A_1$ and $A_2$ chained together.

$c_{ki}$ is obviously derived in the same way as specified by the composition operator in section 3. As for $v_{ki}$, from equation (5) and our assumptions:

$$r_k = \prod_{b_{kj}} u_{kj} q_j = \prod_{b_{kj}} u_{kj} \prod_{a_{ji}} t_{ji} p_i = \prod_{b_{kj}} \prod_{a_{ji}} u_{kj} t_{ji} p_i = \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} p_i \qquad (7)$$

[Footnote: The values for $t_{j1}$ do not matter except for $j = 1$, so they can be arbitrarily set to zero.]

We want the above to be equivalent to the following:

$$r_k = \prod_{c_{ki}} v_{ki} p_i = \prod_{\bigvee_j (b_{kj} \wedge a_{ji})} v_{ki} p_i \qquad (8)$$

The composition operator is derived by carefully considering the terms in equations (7) and (8), based on collecting the terms for fixed $i$.

If we collect the terms in equation (7) with fixed $i$, we have (9). It should be emphasized that (9) is not identical to (7): the former is a product over varying $j$ with both $i$ and $k$ fixed, while the latter is a product over varying $i$ and $j$ with only $k$ fixed. Also note that if $b_{kj} \wedge a_{ji}$ are all false for varying $j$, then no terms affect the result of (7).
This would be equivalent to (9) having a value of one, which is expected from an empty product.

$$\prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} p_i \qquad (9)$$

On the other hand, consider the term in equation (8) with fixed $i$. If $\bigvee_j (b_{kj} \wedge a_{ji})$ is false, i.e. $b_{kj} \wedge a_{ji}$ are all false for varying $j$, then the term is excluded from the product, which is equivalent to multiplying by one instead. If it is true, on the other hand, then $v_{ki} p_i$ is the term that corresponds to the fixed $i$. So if we set $v_{ki}$ according to (10), then equations (8) and (7) end up having the exact same values.

$$v_{ki} = \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} \qquad (10)$$

From this, we can conclude that the composition operator $\otimes$ for two probabilistic adaptation factors should be defined as in definition 6:

Definition 6.
Given probabilistic adaptation factors $(b_{kj}, u_{kj})$ and $(a_{ji}, t_{ji})$, the probabilistic composition operator $\otimes$ is defined as:

$$(b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji}) = \Big(b_{kj} \otimes a_{ji},\ \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji}\Big) \qquad (11)$$

The $\otimes$ operator is "associative" when applied to a probabilistic adaptation factor and a method availability vector:

[Footnote: Remember that only $k$ is fixed in (7) and (8), but both $k$ and $i$ are fixed in (10).]
[Footnote: It is technically not associative in this context, since the $\otimes$ operator in $(b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji})$ is not the same as the $\otimes$ operator in $(a_{ji}, t_{ji}) \otimes p_i$.]

Theorem 2. Applying the adaptation operator twice to a method availability vector is the same as applying the composition operator and then applying the adaptation operator:

$$(b_{kj}, u_{kj}) \otimes ((a_{ji}, t_{ji}) \otimes p_i) = ((b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji})) \otimes p_i$$

Proof.

$$(b_{kj}, u_{kj}) \otimes ((a_{ji}, t_{ji}) \otimes p_i) = (b_{kj}, u_{kj}) \otimes \prod_{a_{ji}} t_{ji} p_i = \prod_{b_{kj}} u_{kj} \prod_{a_{ji}} t_{ji} p_i = \prod_{b_{kj}} \prod_{a_{ji}} u_{kj} t_{ji} p_i$$
$$= \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} p_i = \prod_{\bigvee_j (b_{kj} \wedge a_{ji})} \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} p_i = \prod_{b_{kj} \otimes a_{ji}} \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji} p_i$$
$$= \Big(b_{kj} \otimes a_{ji},\ \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji}\Big) \otimes p_i = ((b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji})) \otimes p_i$$

Likewise, probabilistic adaptation factor composition is associative:
Theorem 3.
The composition operator for probabilistic adaptation factors is associative:

$$(c_{lk}, v_{lk}) \otimes ((b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji})) = ((c_{lk}, v_{lk}) \otimes (b_{kj}, u_{kj})) \otimes (a_{ji}, t_{ji})$$

Proof. Using the fact that $b_{kj} \otimes a_{ji} = \bigvee_j (b_{kj} \wedge a_{ji})$ must be true if $b_{kj} \wedge a_{ji}$ is true, we have:

$$(c_{lk}, v_{lk}) \otimes ((b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji}))$$
$$= (c_{lk}, v_{lk}) \otimes \Big(b_{kj} \otimes a_{ji},\ \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj} \otimes a_{ji},\ \prod_{c_{lk} \wedge (b_{kj} \otimes a_{ji})} v_{lk} \prod_{b_{kj} \wedge a_{ji}} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj} \otimes a_{ji},\ \prod_{c_{lk} \wedge b_{kj} \wedge a_{ji} \wedge (b_{kj} \otimes a_{ji})} v_{lk} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj} \otimes a_{ji},\ \prod_{c_{lk} \wedge b_{kj} \wedge a_{ji}} v_{lk} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj} \otimes a_{ji},\ \prod_{(c_{lk} \otimes b_{kj}) \wedge c_{lk} \wedge b_{kj} \wedge a_{ji}} v_{lk} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj} \otimes a_{ji},\ \prod_{(c_{lk} \otimes b_{kj}) \wedge a_{ji}} \prod_{c_{lk} \wedge b_{kj}} v_{lk} u_{kj} t_{ji}\Big)$$
$$= \Big(c_{lk} \otimes b_{kj},\ \prod_{c_{lk} \wedge b_{kj}} v_{lk} u_{kj}\Big) \otimes (a_{ji}, t_{ji})$$
$$= ((c_{lk}, v_{lk}) \otimes (b_{kj}, u_{kj})) \otimes (a_{ji}, t_{ji})$$

However, probabilistic adaptation factor composition is not commutative, as can be easily seen by considering the composition of probabilistic adaptation factors whose components are not square matrixes.

We can also show a monotonicity property, which formalizes the notion that extending an interface adapter chain results in worse adaptation loss:

Theorem 4. If $A_1$ and $A_2$ are interface adapters, where $A_1$ converts $I_1$ to $I_2$ and $A_2$ converts $I_2$ to $I_3$, with $(a_{ji}, t_{ji}) = \mathrm{depend}(A_1)$ and $(b_{kj}, u_{kj}) = \mathrm{depend}(A_2)$, where they follow the rules for the dummy method in sections 3 and 4, let $p_k = (b_{kj}, u_{kj}) \otimes 1'_{I_2}$ and $p'_k = (b_{kj}, u_{kj}) \otimes (a_{ji}, t_{ji}) \otimes 1'_{I_1}$. Then $p'_k \leq p_k$.

Proof.
From our assumptions, we have: p = p ′ = 0 p k = Y j =1 ∧ b kj u kj (12) p ′ k = Y i =1 ∧ b kj ∧ a ji u kj t ji = Y b kj Y i =1 ∧ a ji u kj t ji = Y b kj u kj Y i =1 ∧ a ji t ji (13)If method k can never be implemented given the source interface, then b k will be true, and given that u k will be zero, p ′ k will also have to be zero. Other-wise, b k will be false, so we can do a term by term comparison of equations (12)and (13), taking advantage of the fact that u kj and t ji are probabilities so thatthey are greater than or equal to zero and lesser than or equal to one:0 ≤ Y i =1 ∧ a ji t ji ≤ lay:MOV OGM MKVMPG RMV MP4selectVideo:AVI OGM MKVMPG ASF MP4startPlayback:playFile:AVI OGM MKVMPG ASF MP4 playVideo:AVI OGM MKVMPG ASF RMV Video1 Video2 Video3Video4 A A A A Figure 2: Adapting playback methods in video playback interfaces which sup-port different video formats, expanded version of figure 1. u kj Y i =1 ∧ a ji t ji ≤ u kj ∴ p ′ k ≤ p k (14)The definitions of the method dependency matrix and the method availabil-ity vector in section 4.1, along with the associativity rules proven in this section,provide a succinct way to mathematically express and analyze the chaining oflossy interface adapters using a probabilistic approach. As an example, we apply the probabilistic approach to analyzing lossy inter-face chaining to the interface adapter graph of figure 2, which is a slightly ex-panded version of figure 1. Instead of simply four interfaces each having a singleplayback method, one of the interfaces,
Video4 , consists of two methods: the selectVideo method chooses a video file that should be played back, and the startPlayback method actually begins video playback. As with the examplein figure 1, each interface can handle different video formats.In this hypothetical scenario, there is an application written for interface
Video3 which needs to use a video service that actually conforms to
Video1 .An interface adapter chain from
Video1 to Video3 would be required if theapplication is to use the video service. Since there are two possible interface13dapter chains, one which goes through
Video2 and another which goes through
Video4 , we would want to use the chain that can support more video formats.The interface adapter from
Video1 to Video2 will be denoted A , the onefrom Video2 to Video3 will be denoted A , the one from Video1 to Video4 willbe denoted A , and the one from Video4 to Video3 will be denoted A . Themethod dependency matrix and conversion probability matrix for adapter A k will be denoted a kji and t kji , respectively. For each interface adapter, we assumethat all methods in the target interface can be implemented in terms of allthe methods in the source interface. For simplicity, we will not define dummymethods for any of the interfaces.Since the single method play of Video2 depends only on the single method playFile of Video1 for A , a ji only has a single true component. The same istrue for a ji . On the other hand, the selectVideo and startPlayback methodsof Video4 both depend on the single method playFile of Video1 for A , so a ji has two rows corresponding to the methods in the target interface, each witha single true component corresponding to the method in the source interface.The playVideo method of Video3 depends on both methods of
Video4 , so a ji has a single row with two true components. The method dependency matrixesfor each interface adapter are shown below: a ji = (cid:0) t (cid:1) a ji = (cid:0) t (cid:1) a ji = (cid:18) tt (cid:19) a ji = (cid:0) t t (cid:1) As for the conversion probability matrixes, a way to estimate the necessaryprobabilities is to compare the number of video formats each interface supports. For A , among the formats MOV , OGM , MKV , MPG , RMV , and
MP4 that
Video2 shouldbe able to support, the adapted interface can only support
OGM , MKV , MPG , MP4 since these are supported by the source interface
Video1 , so the conversionprobability can be estimated as . Assuming that startPlayback in Video4 has no arguments to be converted, the conversion probability matrixes can beset as in the following: t ji = (cid:0) (cid:1) t ji = (cid:0) (cid:1) t ji = (cid:18) (cid:19) t ji = (cid:0) (cid:1) We will first look at the interface adapter chain that starts from
Video1 ,passes through
Video2 , and ends at
Video3 . Given a service conforming to
Video1 that is fully functional, i.e. supports all arguments it could receive, thesole component of the method availability vector corresponding to
Video1 is aprobability of one. To see how the interface adapter chain formed from A and A adapts Video1 to Video3 , i.e. the result of applying A to Video1 and thenapplying A , we can use the adaptation operator:( a kj , t kj ) ⊗ ( a kj , t ji ) ⊗ (cid:0) (cid:1) = (cid:0) (cid:1) While this will not be accurate, it would be a relatively easy way to obtain a roughestimate that could be used for comparing the quality of different interface adapter chains.
We can also do the same for the interface adapter chain that starts from Video1, passes through Video4, and ends at Video3, i.e. the interface adapter chain formed from $A_3$ and $A_4$:

$$(a^{A_4}, t^{A_4}) \otimes (a^{A_3}, t^{A_3}) \otimes (1)$$

These results roughly estimate that when providing Video3 by adapting Video1, the chain formed from $A_1$ and $A_2$ would allow the interface to handle a smaller fraction of the video files it is asked to play back than the chain formed from $A_3$ and $A_4$ would. This is consistent with how the former chain is worse in terms of only being able to handle OGM, MKV, and MPG, while the latter chain can handle significantly more formats, specifically AVI, OGM, MKV, MPG, and ASF. In contrast, the discrete approach would tell us that the two chains are exactly the same.

By using probability estimates of how well each interface adapter can adapt a source interface to a target interface, the probabilistic analysis scheme for interface adapter chaining outlined in this paper can be used to compare the quality of interface adapter chains where methods may not be adapted perfectly, in contrast to the discrete approach, where methods are assumed to be adapted perfectly if they can be adapted at all.
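The contrast with the discrete approach can be illustrated with a small sketch: under the discrete operator the two chains look identical, while the probabilistic operator distinguishes them. The conversion probabilities below are hypothetical stand-ins, chosen only so that the second chain fares better, as in the example.

```python
from math import prod

def adapt_discrete(a, p):
    # Discrete adaptation: target method j is available iff every
    # prerequisite source method i (a[j][i] true) is available.
    return [all(p[i] for i, need in enumerate(row) if need) for row in a]

def adapt_prob(a, t, p):
    # Probabilistic adaptation: multiply conversion probabilities and
    # source availabilities over each target method's prerequisites.
    return [prod(t[j][i] * p[i] for i, need in enumerate(row) if need)
            for j, row in enumerate(a)]

# Two chains with hypothetical conversion probabilities; every
# interface has a single method here for simplicity.
chain_a = [([[True]], [[0.67]]), ([[True]], [[0.9]])]  # A1 then A2
chain_b = [([[True]], [[0.8]]), ([[True]], [[0.9]])]   # A3 then A4

def run_discrete(chain):
    p = [True]  # fully functional source
    for a, _ in chain:
        p = adapt_discrete(a, p)
    return p

def run_prob(chain):
    p = [1.0]  # fully functional source
    for a, t in chain:
        p = adapt_prob(a, t, p)
    return p

print(run_discrete(chain_a), run_discrete(chain_b))  # [True] [True]: identical
print(run_prob(chain_a), run_prob(chain_b))          # roughly 0.603 vs 0.72
```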
Like the optimal adapter chaining problem in the discrete approach, the optimal adapter chaining problem in the probabilistic approach is NP-complete. This is intuitively the case, since the probabilistic approach should be able to encompass the discrete approach, and we show it formally in this section.

We first formally define the optimal adapter chaining problem in the probabilistic approach, which we will call PROB-CHAIN. Let us have an interface adapter graph $(\{I_i\}, \{A_i\})$, where $\{I_i\}$ is the set of interfaces and $\{A_i\}$ is the set of interface adapters. Let $f_k$ be the probabilistic adaptation factor associated with adapter $A_k$. Let $S \in \{I_i\}$ be the source interface and $T \in \{I_i\}$ be the target interface. Let $\{r_m\}$ be the relative invocation probabilities for the methods in the target interface, such that $\sum_m r_m = 1$. Then the problem is whether there is an interface adapter chain $[A_{P(1)}, A_{P(2)}, \ldots, A_{P(n)}]$ such that the source of $A_{P(1)}$ is $S$, the target of $A_{P(n)}$ is $T$, and $\sum_m r_m v^T_m$ is at least as large as some probability $X$, where $v^T = f_{P(n)} \otimes \cdots \otimes f_{P(2)} \otimes f_{P(1)} \otimes 1'_S$ and $1'_S$ is the method availability vector of a fully functional service with interface $S$.

Informally, this is an optimization problem which tries to maximize the probability that an argument can be handled by a method in a fixed target interface, obtained by applying an interface adapter chain to a fully functional service which conforms to the source interface. $\{r_m\}$ expresses how often methods are invoked relative to each other. (While the example in the previous section is simple enough that we can easily figure out exactly what types of arguments can be handled, it can be prohibitively difficult to do so in the general case [3].)

Theorem 5. There is a reduction from the discrete approach to the probabilistic approach for analyzing loss in interface adapter chains.

Proof.
Let there be a method availability vector $p_i$ and a method dependency matrix $a_{ji}$ as expressed in the discrete approach. We construct a corresponding method availability vector $p'_i$, method dependency matrix $a'_{ji}$, and conversion probability matrix $t'_{ji}$ as expressed in the probabilistic approach as follows. If $p_i$ is true, then set $p'_i$ to one, else set $p'_i$ to zero. $a'_{ji}$ is just the same as $a_{ji}$. And set all $t'_{ji}$ to one. Then we have:

$$(a \otimes p)_j = \bigwedge_i (a_{ji} \to p_i) = \bigwedge_{\{i \mid a_{ji}\}} p_i$$

$$\big( (a', t') \otimes p' \big)_j = \prod_{\{i \mid a'_{ji}\}} t'_{ji} \, p'_i = \prod_{\{i \mid a_{ji}\}} p'_i$$

and it is easy to see that a component of $a \otimes p$ is true if and only if the corresponding component of $(a', t') \otimes p'$ is one, and that a component of $a \otimes p$ is false if and only if the corresponding component of $(a', t') \otimes p'$ is zero. This shows how an interface adapter graph for the discrete approach can be converted into one for the probabilistic approach in such a way that the adaptation operators in the two approaches have essentially the same behavior. Since all the mathematics for both approaches follows from the definition of the adaptation operators, we have just shown that the probabilistic approach can encompass the discrete approach.

Next, we formally describe the equivalent problem for the discrete approach, which we will call CHAIN and which is NP-complete [2]. Let us have an interface adapter graph $(\{I_i\}, \{A_i\})$, where $\{I_i\}$ is the set of interfaces and $\{A_i\}$ is the set of interface adapters. Let $a_k$ be the method dependency matrix associated with adapter $A_k$. Let $S \in \{I_i\}$ be the source interface and $T \in \{I_i\}$ be the target interface. Then the problem is whether there is an interface adapter chain $[A_{P(1)}, A_{P(2)}, \ldots, A_{P(m)}]$ such that the source of $A_{P(1)}$ is $S$, the target of $A_{P(m)}$ is $T$, and $\|v^T\| = \|a_{P(m)} \otimes \cdots \otimes a_{P(2)} \otimes a_{P(1)} \otimes 1'_S\|$, the number of true components of $v^T$, is at least as large as some parameter $N$.

Theorem 6.
PROB-CHAIN is NP-complete.

Proof. Given $M$ methods in the target interface, use the method described above to convert an input for CHAIN into an input for PROB-CHAIN, where we also set all $r_m$ to $1/M$. Then $\sum_m r_m v^T_m$ will be $n/M$, where $n$ is the number of methods available from the interface adapter chain, so PROB-CHAIN with $X$ set to $N/M$ will solve CHAIN. Since CHAIN is NP-complete and it is easy to verify whether a given chain achieves the required $\sum_m r_m v^T_m$, PROB-CHAIN must also be NP-complete.

6 A greedy algorithm
As shown in section 5, the problem of finding an optimal interface adapter chain, maximizing the probability of an argument being handled by a method in the target interface, is NP-complete. Short of developing a polynomial-time algorithm for an NP-complete problem, practical systems will have to use a heuristic algorithm or an exponential-time algorithm with reasonable performance in practice.

Algorithm 1 is a greedy algorithm that finds an optimal interface adapter chain between a given source interface and a target interface. Given an interface adapter graph G, it works by looking at every possible acyclic adapter chain with an arbitrary source that results in the target interface t, in order of increasing loss, taking advantage of equation (14), until it finds a chain that starts with the desired source interface s.

In this context, loss means the probability that a method in the target interface cannot handle an argument, given a fully functional service with the source interface; this is computed in Algorithm 2, so the algorithm is guaranteed to find the optimal interface adapter chain. In the worst case, however, the algorithm takes exponential time, since there can be an exponential number of acyclic chains in an interface adapter graph.

Algorithm 1 A probabilistic greedy algorithm for interface adapter chaining.

procedure Prob-Greedy-Chain(G = (V, E), s, t, {r_m})
    C ← {[]}                          ▷ chains to extend
    M ← ∅                             ▷ discarded chains
    D ← {[] ↦ I_dim(1'_t)}            ▷ method dependency matrixes
    while C ≠ ∅ do
        c ← element of C minimizing Prob-Loss(c, D, {r_m})
        if c ≠ [] ∧ source(c[1]) = s then
            return c
        else if no acyclic chain not in C ∪ M extends c then
            C ← C − {c}
            M ← M ∪ {c}
        else
            if c = [] then
                B ← {[e] | e ∈ E, target(e) = t}
            else
                B ← {e : c | e ∈ E, target(e) = source(c[1])}
            end if
            remove cyclic chains from B
            C ← C ∪ B
            D ← D ∪ {e : c ↦ D[c] ⊗ depend(e) | e : c ∈ B}
        end if
    end while
end procedure

Algorithm 2 Computing the probabilistic loss of an interface adapter chain.

function Prob-Loss(c, D, {r_m})
    s ← source(c[1])
    v ← D[c] ⊗ 1'_s
    return 1 − Σ_m r_m v_m
end function

Algorithm 1 can easily be extended to support behavior similar to service discovery by checking whether the current source is among a potential set of source interfaces instead of just checking against one, as is done with a similar algorithm based on the discrete approach [2].
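A runnable sketch of Algorithms 1 and 2 follows, under simplifying assumptions: adapters are (source, target, a, t) tuples, every interface has a single method, and the conversion probabilities are hypothetical. Instead of caching composed matrices in D as Algorithm 1 does, the sketch recomputes each chain's loss directly; a priority queue yields candidate chains in order of increasing loss.

```python
import heapq
import itertools
from math import prod

def adapt(a, t, p):
    # Probabilistic adaptation operator, as in the text.
    return [prod(t[j][i] * p[i] for i, need in enumerate(row) if need)
            for j, row in enumerate(a)]

def prob_loss(chain, num_methods, r):
    # Algorithm 2: loss of a chain given a fully functional service
    # conforming to the chain's source interface.
    src = chain[0][0]
    v = [1.0] * num_methods[src]
    for _, _, a, t in chain:
        v = adapt(a, t, v)
    return 1.0 - sum(rm * vm for rm, vm in zip(r, v))

def prob_greedy_chain(adapters, num_methods, s, t, r):
    # Algorithm 1 (sketch): best-first search over acyclic chains ending
    # at t, extended backward toward s in order of increasing loss.
    tie = itertools.count()  # tie-breaker so the heap never compares chains
    heap = [(prob_loss([ad], num_methods, r), next(tie), [ad])
            for ad in adapters if ad[1] == t]
    heapq.heapify(heap)
    while heap:
        loss, _, chain = heapq.heappop(heap)
        if chain[0][0] == s:
            return chain, 1.0 - loss
        seen = {chain[0][0]} | {ad[1] for ad in chain}  # keeps chains acyclic
        for ad in adapters:
            if ad[1] == chain[0][0] and ad[0] not in seen:
                new = [ad] + chain
                heapq.heappush(heap,
                               (prob_loss(new, num_methods, r), next(tie), new))
    return None, 0.0

# Hypothetical adapter graph in the shape of the Video1..Video4 example.
adapters = [
    ("Video1", "Video2", [[True]], [[0.67]]),  # A1
    ("Video2", "Video3", [[True]], [[0.9]]),   # A2
    ("Video1", "Video4", [[True]], [[0.8]]),   # A3
    ("Video4", "Video3", [[True]], [[0.9]]),   # A4
]
num_methods = {"Video1": 1, "Video2": 1, "Video3": 1, "Video4": 1}

chain, value = prob_greedy_chain(adapters, num_methods,
                                 "Video1", "Video3", [1.0])
print([(ad[0], ad[1]) for ad in chain], round(value, 2))
# [('Video1', 'Video4'), ('Video4', 'Video3')] 0.72
```

The best-first order is sound because extending a chain multiplies in factors no greater than one and so can never decrease its loss, but the search may still enumerate exponentially many acyclic chains in the worst case, matching the analysis above.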
7 Conclusion

Interface adapters can allow code written to use one interface to use another interface, and chaining them together can substantially reduce the effort required to create interface adapters. Since interface adapters will often be unable to convert interfaces perfectly, loss can be incurred during interface adaptation, and we need a rigorous mathematical framework for analyzing such loss. Instead of just analyzing whether or not a method in a target interface can be provided, we have developed a probabilistic framework in which partial adaptation of methods can also be handled.

We developed the probabilistic framework by first constructing a probabilistic model for interface adaptation. Based on this, we defined mathematical objects and operations which probabilistically express loss in adapted interfaces and interface adapters. These were then used to prove that probabilistic optimal adapter chaining is NP-complete and to construct a greedy algorithm which can find an optimal adapter chain, taking exponential time in the worst case. Together they provide a more fine-grained approach to analyzing loss in interface adapter chains compared to the discrete approach.

Future avenues of research include alternate probabilistic approaches which require weaker and more realistic assumptions, yet can still feasibly be used in real interface adaptation systems. Another avenue is to find good ways to derive the necessary probabilities from the interface adapters, either through empirical means, where interface adapters are invoked on many arguments to measure the probabilities, or through analytical means, which can approximate the probabilities based on program structure. Finally, there remains the design and implementation of an actual interface adaptation system which takes advantage of the probabilistic approach to analyzing loss in interface adapter chaining.

References

[1] Ken Arnold, editor.
The Jini Specifications. Addison-Wesley, 2nd edition, December 2000.

[2] Yoo Chung and Dongman Lee. Mathematical basis for the chaining of lossy interface adapters. IET Software, 4(1):43–54, February 2010.

[3] Yoo Chul Chung. Formal Analysis Framework for Lossy Interface Adapter Chaining. PhD thesis, KAIST, February 2010. Chapter 5.

[4] M. Crampin and F. A. E. Pirani. Applicable Differential Geometry, chapter 0, pages 5–7. Number 59 in London Mathematical Society Lecture Note Series. Cambridge University Press, March 1987.

[5] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, June 1959.

[6] Erich Gamma, Richard Helm, Ralph Johnson, and John M. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, November 1994.

[7] Thomas Gschwind. Type based adaptation: An adaptation approach for dynamic distributed systems. In Proceedings of the Third International Workshop on Software Engineering and Middleware, volume 2596 of Lecture Notes in Computer Science, pages 130–143, May 2002.

[8] Sven Moritz Hallberg. Eternal compatibility in theory. The Monad.Reader, 2, May 2005. No longer online; available from the Internet Archive Wayback Machine.

[9] Piotr Kaminski, Marin Litoiu, and Hausi Müller. A design technique for evolving web services. In Proceedings of the 2006 Conference of the Center for Advanced Studies on Collaborative Research, Toronto, Ontario, Canada, October 2006. ACM Press.

[10] Ralph Keller and Urs Hölzle. Binary component adaptation. In Proceedings of the 12th European Conference on Object-Oriented Programming, volume 1445 of Lecture Notes in Computer Science, pages 307–329. Springer-Verlag, July 1998.

[11] Byoungoh Kim, Kyungmin Lee, and Dongman Lee. An adapter chaining scheme for service continuity in ubiquitous environments with adapter evaluation. In Proceedings of the Sixth IEEE International Conference on Pervasive Computing and Communications, pages 537–542. IEEE Computer Society Press, March 2008.

[12] Serge Lang. Algebra, volume 211 of Graduate Texts in Mathematics, page 9. Springer-Verlag, revised third edition, 2002.

[13] Vlada Matena, Sanjeev Krishnan, Linda DeMichiel, and Beth Stearns. Applying Enterprise JavaBeans: Component-Based Development for the J2EE Platform. Addison-Wesley, second edition, May 2003.

[14] Hamid Reza Motahari Nezhad, Boualem Benatallah, Axel Martens, Francisco Curbera, and Fabio Casati. Semi-automated adaptation of service interactions. In Proceedings of the 16th International Conference on World Wide Web, pages 993–1002. ACM Press, May 2007.

[15] Shankar R. Ponnekanti and Armando Fox. Application-service interoperation without standardized service interfaces. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications. IEEE Computer Society Press, March 2003.

[16] Julien Vayssière. Transparent dissemination of adapters in Jini. In