
Model reduction of linear multi-agent systems by clustering with H2 and H∞ error bounds





University of Groningen

Model reduction of linear multi-agent systems by clustering with H2 and H∞ error bounds

Jongsma, Hidde-Jan; Mlinarić, Petar; Grundel, Sara; Benner, Peter; Trentelman, Harry L.

Published in: Mathematics of Control, Signals, and Systems (MCSS)

DOI: 10.1007/s00498-018-0212-6

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.

Document Version

Publisher's PDF, also known as Version of record

Publication date: 2018

Link to publication in University of Groningen/UMCG research database

Citation for published version (APA):

Jongsma, H-J., Mlinarić, P., Grundel, S., Benner, P., & Trentelman, H. L. (2018). Model reduction of linear multi-agent systems by clustering with H2 and H∞ error bounds. Mathematics of Control, Signals, and Systems (MCSS), 30(6). https://doi.org/10.1007/s00498-018-0212-6

Copyright

Other than for strictly personal use, it is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license (like Creative Commons).

Take-down policy

If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal. For technical reasons the number of authors shown on this cover page is limited to 10 maximum.


https://doi.org/10.1007/s00498-018-0212-6

ORIGINAL ARTICLE

Model reduction of linear multi-agent systems by clustering with H2 and H∞ error bounds

Hidde-Jan Jongsma¹ · Petar Mlinarić² · Sara Grundel² · Peter Benner² · Harry L. Trentelman¹

Received: 23 June 2017 / Accepted: 17 April 2018 / Published online: 26 April 2018 © The Author(s) 2018

Abstract In the recent paper (Monshizadeh et al. in IEEE Trans Control Netw Syst 1(2):145–154, 2014. https://doi.org/10.1109/TCNS.2014.2311883), model reduction of leader–follower agent networks by clustering was studied. For such multi-agent networks, a reduced order network is obtained by partitioning the set of nodes in the graph into disjoint sets, called clusters, and associating with each cluster a single, new, node in a reduced network graph. In Monshizadeh et al. (2014), this method was studied for the special case that the agents have single integrator dynamics. For a special class of graph partitions, called almost equitable partitions, an explicit formula was derived for the H2 model reduction error. In the present paper, we will extend and generalize the results from Monshizadeh et al. (2014) in a number of directions.

This research is supported by a research grant of the “International Max Planck Research School (IMPRS) for Advanced Methods in Process and System Engineering (Magdeburg)”.

✉ Petar Mlinarić, mlinaric@mpi-magdeburg.mpg.de

Hidde-Jan Jongsma, h.jongsma@rug.nl
Sara Grundel, grundel@mpi-magdeburg.mpg.de
Peter Benner, benner@mpi-magdeburg.mpg.de
Harry L. Trentelman, h.l.trentelman@rug.nl

1 Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, Groningen, The Netherlands

2 Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, Magdeburg, Germany


Firstly, we will establish an a priori upper bound for the H2 model reduction error in case that the agent dynamics is an arbitrary multivariable input–state–output system. Secondly, for the single integrator case, we will derive an explicit formula for the H∞ model reduction error. Thirdly, we will prove an a priori upper bound for the H∞ model reduction error in case that the agent dynamics is a symmetric multivariable input–state–output system. Finally, we will consider the problem of obtaining a priori upper bounds if we cluster using arbitrary, possibly non almost equitable, partitions.

Keywords Model reduction · Clustering · Multi-agent system · Consensus · Graph partitions

1 Introduction

In the last few decades, the world has become increasingly connected. This has generated significant interest in complex networks, smart grids, distributed systems, transportation networks, biological networks, and networked multi-agent systems, see, e.g., [2,10,28]. Widely studied topics in networked systems have been the problems of consensus and synchronization, see [19,20,27,30]. Other important subjects in the theory of networked systems are flocking, formation control, sensor placement, and controllability of networks, see, e.g., [8,9,11,12,24,29,34].

Analysis and controller design for large-scale complex networks can become very expensive from a computational point of view, especially for problems where the complexity of the network scales as a power of the number of nodes it contains. In order to tackle this problem, there is a need for methods and procedures to approximate the original networks by smaller, less complex ones.

Direct application of established model reduction techniques, such as balanced truncation, Hankel-norm approximation, and Krylov subspace methods, see, e.g., [1, 3], to the dynamical models of networked systems generally leads to a collapse of the network structure, as well as the loss of important properties such as consensus or synchrony.

Model reduction techniques specifically for networked multi-agent systems with first-order agents have been proposed in [6,15,16,22]. Extensions to second-order agents have been considered in [7,14] and to more general higher-order agents in [4, 17,23,25]. Some of these methods are based on clustering nodes in the network. With clustering, the idea is to partition the set of nodes in the network graph into disjoint sets called clusters, and to associate with each cluster a single, new, node in the reduced network, thus reducing the number of nodes and connections and the complexity of the network topology. For a review on clustering in data mining see, e.g., [18].

In [26], model reduction by clustering was put in the context of model order reduction by Petrov–Galerkin projection. The results in [26] provide explicit expressions for the H2 model reduction error if a leader–follower network with single integrator agent dynamics is clustered using an almost equitable partition of the graph. In the present paper, our aim is to generalize and extend the results in [26] to networks where the agent dynamics is given by an arbitrary multivariable input–state–output system. We also aim at finding explicit formulas and a priori upper bounds for the model reduction error measured in the H∞-norm. Finally, we will consider the problem of clustering a network according to arbitrary, not necessarily almost equitable, graph partitions. The main contributions of this paper are the following:

1. We derive an a priori upper bound for the H2 model reduction error for the case that the agents are represented by an arbitrary input–state–output system.
2. We extend the results in [26] for single integrator dynamics by giving an explicit expression for the H∞ model reduction error in terms of properties of the given graph partition.
3. We establish an a priori upper bound for the H∞ model reduction error for the case that the agents are represented by an arbitrary but symmetric input–state–output system.
4. We establish some preliminary results on the model reduction error in case of clustering using an arbitrary, possibly non almost equitable, partition.

The outline of this paper is as follows. In Sect. 2, we introduce some notation and discuss some elementary facts about computing the H2- and H∞-norm of stable transfer functions needed later on in this paper. In Sect. 3, we formulate our problem of model reduction of leader–follower multi-agent networks. Section 4 reviews some theory on graph partitions and model reduction by clustering and relates this method to Petrov–Galerkin projection of the original network. Preservation of synchronization is also discussed there. In Sect. 5, we provide a priori error bounds on the H2 model reduction error for networks with arbitrary agent dynamics, clustered using almost equitable partitions. In Sect. 6, we complement these results by providing upper bounds on the H∞ model reduction error. In Sect. 7, the problem of clustering networks according to general partitions is considered and the first steps toward a priori error bounds on both the H2 and H∞ model reduction errors are made. Numerical examples for which we compare the actual errors with the a priori bounds established in this paper are presented in Sect. 8. Finally, Sect. 9 provides some conclusions. To enhance readability, some of the more technical proofs in this paper have been put in the "Appendix."

2 Preliminaries

In this section, we briefly introduce some notation and discuss some basic facts on finite-dimensional linear systems. The trace of a square matrix A is denoted by tr(A). The largest singular value of a matrix A is denoted by σ1(A). For given real numbers α1, α2, …, αk, we denote by diag(α1, α2, …, αk) the k × k diagonal matrix with the αi's on the diagonal. For square matrices A1, A2, …, Ak, we use diag(A1, A2, …, Ak) to denote the block diagonal matrix with the Ai's as diagonal blocks. For a given matrix A, let A⁺ denote its Moore–Penrose pseudoinverse.

Consider the input–state–output system

ẋ = Ax + Bu,
y = Cx, (1)


with x ∈ R^n, u ∈ R^m, y ∈ R^p, and transfer function S(s) = C(sI − A)⁻¹B. If S has all its poles in the open left half complex plane, then its H2-norm is defined by

‖S‖²_{H2} := (1/2π) ∫_{−∞}^{+∞} tr( S(−iω)^T S(iω) ) dω.

If A is Hurwitz, then the H2-norm can be computed as

‖S‖²_{H2} = tr( B^T X B ),

where X is the unique positive semi-definite solution of the Lyapunov equation

A^T X + X A + C^T C = 0. (2)

For the purposes of this paper, we also need to deal with the situation when A is not Hurwitz. Let X⁺(A) denote the unstable subspace of A, i.e., the direct sum of the generalized eigenspaces of A corresponding to its eigenvalues in the closed right half plane. We state the following proposition:

Proposition 1 Assume that X⁺(A) ⊂ ker C. Then, the Lyapunov equation (2) has at least one positive semi-definite solution. Among all positive semi-definite solutions, there is exactly one solution, say X, with the property X⁺(A) ⊂ ker X. For this particular solution X, we have ‖S‖²_{H2} = tr( B^T X B ).

A proof of this result can be found in “Appendix A”.
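As a minimal numerical sketch of the Lyapunov-based computation of the H2-norm (the stable two-state system below is a hypothetical illustration, not taken from the paper), one can compare tr(BᵀXB) against a direct discretization of the frequency-domain definition:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable SISO system: S(s) = 1/(s+1) + 1/(s+2)
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Solve A^T X + X A + C^T C = 0 (scipy solves a x + x a^H = q)
X = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq = float(np.trace(B.T @ X @ B))        # ||S||^2_{H2} = tr(B^T X B)

# Cross-check against (1/2pi) * integral of tr(S(-iw)^T S(iw)) dw,
# using symmetry of the integrand and a log-spaced trapezoidal rule
w = np.concatenate([[0.0], np.logspace(-3, 5, 4000)])
Sw = 1/(1j*w + 1) + 1/(1j*w + 2)
vals = np.abs(Sw)**2
h2_sq_freq = 2 * np.sum(0.5*(vals[1:] + vals[:-1])*np.diff(w)) / (2*np.pi)

print(h2_sq, h2_sq_freq)   # both close to 17/12
```

For this diagonal example the Lyapunov solution is available in closed form (X_ij = 1/(λ_i + λ_j)), which gives ‖S‖²_{H2} = 17/12, matching both computations.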

If S has all its poles in the open left half plane, then its H∞-norm is defined by

‖S‖_{H∞} := sup_{ω∈R} σ1(S(iω)).

We will now deal with computing the H∞-norm. The result is a generalization of Lemma 4 in [16]. For a proof, we refer to "Appendix B."

Lemma 1 Consider the system (1). Assume that its transfer function S has all its poles in the open left half plane. If there exists X ∈ R^{p×p} such that X = X^T and CA = XC, then ‖S‖_{H∞} = σ1(S(0)).

Continuing our effort to compute the H∞-norm, we now formulate a lemma that will be instrumental in evaluating a transfer function at the origin. Recall that for a given matrix A, its Moore–Penrose pseudoinverse is denoted by A⁺.

Lemma 2 Consider the system (1). If A is symmetric and ker A ⊂ ker C, then 0 is not a pole of the transfer function S and we have S(0) = −CA⁺B.

This result is proven in “Appendix C.”
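A quick numerical sanity check of Lemma 2 (the matrices below are hypothetical, chosen so that A is symmetric and singular with ker A ⊂ ker C):

```python
import numpy as np

# A symmetric, singular; ker A = span{e3} is contained in ker C
A = np.diag([-1.0, -2.0, 0.0])
C = np.array([[1.0, 1.0, 0.0]])
B = np.eye(3)

S0 = -C @ np.linalg.pinv(A) @ B            # Lemma 2: S(0) = -C A^+ B

# Compare with C (sI - A)^{-1} B evaluated near s = 0
s = 1e-8
S_near0 = C @ np.linalg.solve(s*np.eye(3) - A, B)
print(S0)   # approximately [1, 0.5, 0]
```

Here the zero eigenvalue of A is invisible to the output (C annihilates its kernel), so the transfer function has a removable behavior at the origin and the pseudoinverse formula reproduces the limit.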

To conclude this section, we briefly review the model reduction technique known as Petrov–Galerkin projection (see also [1]).


Definition 1 Consider the system (1). Let W, V ∈ R^{n×r}, with r < n, be such that W^T V = I. The matrix V W^T is then a projector, called a Petrov–Galerkin projector. The reduced order system

˙x̂ = W^T A V x̂ + W^T B u,
ŷ = C V x̂,

with x̂ ∈ R^r, is called the Petrov–Galerkin projection of the original system (1).
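Definition 1 can be sketched in a few lines; the system matrices and bases below are random placeholders (not from the paper), used only to illustrate the normalization WᵀV = I and the resulting oblique projector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m, p = 6, 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # trial basis
W0 = rng.standard_normal((n, r))                   # raw test basis
W = W0 @ np.linalg.inv(W0.T @ V).T                 # normalize so W^T V = I

Pi = V @ W.T                                       # Petrov-Galerkin projector
assert np.allclose(W.T @ V, np.eye(r))
assert np.allclose(Pi @ Pi, Pi)                    # idempotent

# Reduced order system with r states
A_hat, B_hat, C_hat = W.T @ A @ V, W.T @ B, C @ V
```

When W = V (orthonormal), this reduces to a Galerkin projection; the clustering-based reduction later in the paper uses a specific structured choice of W and V built from the partition.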

3 Problem formulation

We consider networks of diffusively coupled linear subsystems. These subsystems, called agents, have identical dynamics; however, a selected subset of the agents, called the leaders, also receives an input from outside the network. The remaining agents are called followers. The network consists of N agents, indexed by i, so i ∈ V := {1, 2, …, N}. The subset V_L ⊂ V is the index set of the leaders, more explicitly V_L = {v1, v2, …, vm}. The followers are indexed by V_F := V \ V_L. More specifically, the leaders are represented by the finite-dimensional linear system

ẋ_i = A x_i + B Σ_{j=1}^N a_ij (x_j − x_i) + E u_ℓ,   i = v_ℓ ∈ V_L,

whereas the followers have dynamics

ẋ_i = A x_i + B Σ_{j=1}^N a_ij (x_j − x_i),   i ∈ V_F.

The weights a_ij ≥ 0 represent the coupling strengths of the diffusive coupling between the agents. In this paper, we assume that a_ij = a_ji for all i, j ∈ V. Also, a_ii = 0 for all i ∈ V. Furthermore, x_i ∈ R^n is the state of agent i, and u_ℓ ∈ R^r is the external input to the leader v_ℓ. Finally, A ∈ R^{n×n}, B ∈ R^{n×n}, and E ∈ R^{n×r} are real matrices.

It is customary to represent the interaction between the agents by the graph G with node set V = {1, 2, …, N} and adjacency matrix A = (a_ij). In the setup of this paper, this graph is undirected, reflecting the assumption that A is symmetric. The Laplacian matrix L ∈ R^{N×N} of the graph G is defined as

L_ij = d_i if i = j,  and  L_ij = −a_ij if i ≠ j,

with d_i = Σ_{j=1}^N a_ij.

Recall that the set of leader nodes is V_L = {v1, v2, …, vm}, and define the matrix M ∈ R^{N×m} as

M_iℓ = 1 if i = v_ℓ, and M_iℓ = 0 otherwise.


Denote x = col(x1, x2, …, x_N) and u = col(u1, u2, …, um). The total network is then represented by

ẋ = (I_N ⊗ A − L ⊗ B)x + (M ⊗ E)u. (3)

The goal of this paper is to find a reduced order networked system whose dynamics is a good approximation of the networked system (3). Following [26], the idea to obtain such an approximation is to cluster groups of agents in the network, and to treat each of the resulting clusters as a node in a new, reduced order, network. The reduced order network will again be a leader–follower network, and by the clustering procedure, essential interconnection features of the network will be preserved. We will also require that the synchronization properties of the network are preserved after reduction. We assume that the original network is synchronized, meaning that if the external inputs satisfy u_ℓ = 0 for ℓ = 1, 2, …, m, then for all i, j ∈ V, we have

x_i(t) − x_j(t) → 0

as t → ∞. We impose that the reduction procedure preserves this property. In this paper, a standing assumption will be that the graph G of the original network is connected. This is equivalent to the condition that 0 is a simple eigenvalue of the Laplacian L, see [21, Theorem 2.8]. In this case, the network reaches synchronization if and only if (L ⊗ I_n)x(t) → 0 as t → ∞.
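The Laplacian construction above and the connectedness criterion can be checked numerically; the 4-node weighted graph below is a hypothetical example, not one from the paper:

```python
import numpy as np

# Symmetric adjacency matrix (a_ij = a_ji, a_ii = 0) of a connected graph
Adj = np.array([[0., 2., 0., 1.],
                [2., 0., 3., 0.],
                [0., 3., 0., 1.],
                [1., 0., 1., 0.]])
L = np.diag(Adj.sum(axis=1)) - Adj          # L_ii = d_i, L_ij = -a_ij

assert np.allclose(L @ np.ones(4), 0)       # row sums of L are zero
eigs = np.sort(np.linalg.eigvalsh(L))
print(eigs)   # smallest eigenvalue is 0; second-smallest is positive (connected)
```

A zero second-smallest eigenvalue would indicate a disconnected graph, violating the standing assumption.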

In order to be able to compare the original network (3) with its reduced order approximation and to make statements about the approximation error, we need a notion of distance between the networks. One way to obtain such a notion is to introduce an output associated with the network (3). By doing this, both the original network and its approximation become input–output systems, and we can compare them by looking at the difference of their transfer functions. Being a measure for the disagreement between the states of the agents in (3), we choose y = (L ⊗ I_n)x as the output of the original network. Indeed, this output y can be considered a measure of the disagreement in the network, in the sense that y(t) is small if and only if the network is close to being synchronized. Thus, with the original system (3) we now identify the input–state–output system

ẋ = (I_N ⊗ A − L ⊗ B)x + (M ⊗ E)u,
y = (L ⊗ I_n)x. (4)

The state space dimension of (4) is equal to nN, its number of inputs equals mr, and the number of outputs is nN.
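The Kronecker structure of (4) is straightforward to assemble; the agent model and 3-node path graph below are hypothetical placeholders used only to check the dimensions nN, mr, and nN quoted above:

```python
import numpy as np

N, n, r, m = 3, 2, 1, 1
Adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # path graph
L = np.diag(Adj.sum(axis=1)) - Adj
M = np.array([[1.], [0.], [0.]])           # node 1 is the single leader

A = np.array([[0., 1.], [-1., -1.]])       # hypothetical agent dynamics
B = np.eye(n)
E = np.array([[0.], [1.]])

A_net = np.kron(np.eye(N), A) - np.kron(L, B)   # state matrix, nN x nN
B_net = np.kron(M, E)                            # input matrix, nN x mr
C_net = np.kron(L, np.eye(n))                    # disagreement output, nN x nN

print(A_net.shape, B_net.shape, C_net.shape)
```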

In this paper, we will use clustering to obtain a reduced order network, i.e., a network with a reduced number of agents, as an approximation of the original network (4).

4 Graph partitions and reduction by clustering

We consider networks whose interaction topologies are represented by weighted graphs G with node set V. The graph of the original network (3) is undirected; however, our reduction procedure will lead to networks on directed graphs. As before, the adjacency matrix of the graph G is the matrix A = (a_ij), where a_ij ≥ 0 is the weight of the arc from node j to node i. As noted before, the graph is undirected if and only if A is symmetric.

A nonempty subset C ⊂ V is called a cell or cluster of V. A partition of a graph is defined as follows.

Definition 2 Let G be an undirected graph. A partition π = {C1, C2, …, Ck} of V is a collection of cells such that V = ∪_{i=1}^k C_i and C_i ∩ C_j = ∅ whenever i ≠ j.

When we say that π is a partition of G, we mean that π is a partition of the vertex set V of G. Nodes i and j are called cellmates in π if they belong to the same cell of π.

The characteristic vector of a cell C ⊂ V is the N-dimensional column vector p(C) defined as

p_i(C) = 1 if i ∈ C, and p_i(C) = 0 otherwise,

where p_i(C) is the ith entry of p(C). The characteristic matrix of the partition π = {C1, C2, …, Ck} is defined as the N × k matrix

P(π) = [ p(C1)  p(C2)  ⋯  p(Ck) ].

For a given partition π = {C1, C2, …, Ck}, consider the cells C_p and C_q with p ≠ q. For any given node j ∈ C_q, we define its degree with respect to C_p as the sum of the weights of all arcs from j to i ∈ C_p, i.e., the number

d_pq(j) := Σ_{i∈C_p} a_ij.

Next, we will construct a reduced order approximation of (4) by clustering the agents in the network using a partition of G. Let π be a partition of G, and let P := P(π) be its characteristic matrix. Extending the main idea in [26], we take as reduced order system the Petrov–Galerkin projection of the original system (4), with the following choice for the matrices V and W:

W = P(P^T P)⁻¹ ⊗ I_n ∈ R^{nN×nk},   V = P ⊗ I_n ∈ R^{nN×nk}.

The dynamics of the resulting reduced order model is then given by

˙x̂ = (I_k ⊗ A − L̂ ⊗ B)x̂ + (M̂ ⊗ E)u,   ŷ = (LP ⊗ I_n)x̂, (5)

where

L̂ = (P^T P)⁻¹P^T L P ∈ R^{k×k},   M̂ = (P^T P)⁻¹P^T M ∈ R^{k×m}.


It can be seen by inspection that the matrix L̂ is the Laplacian of a weighted directed graph with node set {1, 2, …, k}, with k equal to the number of clusters in the partition π, and adjacency matrix Â = (â_pq), with

â_pq = (1/|C_p|) Σ_{j∈C_q} d_pq(j),

where d_pq(j) is the degree of j ∈ C_q with respect to C_p, and |C_p| the cardinality of C_p. In other words: in the reduced graph, the weight of the edge from node q to node p is obtained by summing over all j ∈ C_q the weights of all edges to i ∈ C_p and dividing this sum by the cardinality of C_p. The row sums of L̂ are indeed equal to zero since L̂1_k = 0. The matrix M̂ ∈ R^{k×m} satisfies

M̂_pj = 1/|C_p| if v_j ∈ C_p, and M̂_pj = 0 otherwise,

where v1, v2, …, vm are the leader nodes, p = 1, 2, …, k, and j = 1, 2, …, m.
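The reduced matrices L̂ and M̂ are two small linear solves. As a hypothetical worked example (not from the paper), take the complete unweighted graph on 4 nodes with partition π = {{1,2},{3},{4}} and a single leader at node 1:

```python
import numpy as np

N = 4
L = N*np.eye(N) - np.ones((N, N))              # Laplacian of K4
P = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])                   # characteristic matrix of pi
M = np.array([[1.], [0.], [0.], [0.]])         # leader node 1

Lhat = np.linalg.solve(P.T @ P, P.T @ L @ P)   # (P^T P)^{-1} P^T L P
Mhat = np.linalg.solve(P.T @ P, P.T @ M)       # (P^T P)^{-1} P^T M

assert np.allclose(Lhat @ np.ones(3), 0)       # row sums of Lhat are zero
print(Lhat)   # generally non-symmetric: the reduced graph is directed
print(Mhat)   # leader weight 1/|C_1| = 0.5 in the first cluster
```

Note that L̂ is non-symmetric even though L is symmetric, matching the remark that the reduction produces a directed graph.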

Clearly, the state space dimension of the reduced order network (5) is equal to nk, whereas the dimensions mr and nN of the input and output have remained unchanged. Thus, we can investigate the error between the original and reduced order network by looking at the difference of their transfer functions. In the sequel, we will investigate both the H2-norm and the H∞-norm of this difference.

Before doing this, we will first study the question whether our reduction procedure preserves synchronization. It is important to note that since, by assumption, the original undirected graph is connected, it has a directed spanning tree. It is easily verified that this property is preserved by our clustering procedure. Then, since the property of having a directed spanning tree is equivalent to 0 being a simple eigenvalue of the Laplacian (see [21, Proposition 3.8]), the reduced order Laplacian L̂ again has 0 as a simple eigenvalue.

Now assume that the original network (4) is synchronized. It is well known, see, e.g., [33], that this is equivalent to the condition that for each nonzero eigenvalue λ of the Laplacian L the matrix A − λB is Hurwitz. Thus, synchronization is preserved if and only if for each nonzero eigenvalue λ̂ of the reduced order Laplacian L̂ the matrix A − λ̂B is Hurwitz.

Unfortunately, in general, A − λB being Hurwitz for all nonzero λ ∈ σ(L) does not imply that A − λ̂B is Hurwitz for all nonzero λ̂ ∈ σ(L̂). An exception is the "single integrator" case A = 0 and B = 1, where this condition is trivially satisfied, so in this special case synchronization is preserved. Also, if we restrict ourselves to a special type of graph partitions, namely almost equitable partitions, then synchronization turns out to be preserved. We will review this type of partition now.

Again, let G be a weighted, undirected graph, and let π = {C1, C2, …, Ck} be a partition of G. Given two clusters C_p and C_q with p ≠ q, and a given node j ∈ C_q, recall that d_pq(j) denotes its degree with respect to C_p. We call the partition π an almost equitable partition (AEP) if, for all p ≠ q, d_pq(j) is independent of j ∈ C_q, i.e., d_pq(j1) = d_pq(j2) for all j1, j2 ∈ C_q. We refer to Fig. 1 for an example of a graph with an AEP.

Fig. 1 A graph from [26] for which the partition {{1, 2, 3, 4}, {5, 6}, {7}, {8}, {9, 10}} is almost equitable

It is a well-known fact (see [5]) that π is an AEP if and only if the image of its characteristic matrix is invariant under the Laplacian.

Lemma 3 Consider the weighted undirected graph G with Laplacian matrix L. Let π be a partition of G with characteristic matrix P := P(π). Then, π is an AEP if and only if L im P ⊂ im P.

As an immediate consequence, the reduced Laplacian L̂ resulting from an AEP satisfies LP = PL̂. Indeed, since im P is L-invariant, we have LP = PX for some matrix X. Obviously, we must then have X = (P^T P)⁻¹P^T L P = L̂. From this, it follows that σ(L̂) ⊂ σ(L). It then readily follows that synchronization is preserved if we cluster according to an AEP:

Theorem 1 Assume that the network (4) is synchronized. Let π be an AEP. Then, the reduced order network (5) obtained by clustering according to π is synchronized.

To the best of our knowledge, there is no known polynomial-time algorithm for finding nontrivial AEPs of a given graph, where by "trivial AEPs" we mean the coarsest and the finest partitions ({V} and {{i} : i ∈ V}). There is a polynomial-time algorithm for finding the coarsest AEP that is finer than a given partition (see [35]), but there is no guarantee that it will find a nontrivial AEP. Furthermore, it is not clear whether a given graph has any nontrivial AEPs at all. On the other hand, a graph can have many AEPs; e.g., every partition of a complete unweighted graph is an AEP. Because of this, in Sect. 7 we consider extensions of our results in Sects. 5 and 6, which are based on AEPs, to arbitrary partitions.
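The invariance test of Lemma 3 gives a simple numerical AEP check: LP must equal its orthogonal projection onto im P. The two graphs below are hypothetical illustrations; the first exploits the fact quoted above that every partition of a complete unweighted graph is an AEP.

```python
import numpy as np

def is_aep(L, P):
    """Lemma 3: pi is an AEP iff L im P is contained in im P, i.e. iff
    L P equals its projection P (P^T P)^{-1} P^T (L P)."""
    LP = L @ P
    return np.allclose(LP, P @ np.linalg.solve(P.T @ P, P.T @ LP))

# Complete unweighted graph K5: every partition is an AEP
L_K5 = 5*np.eye(5) - np.ones((5, 5))
P1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
print(is_aep(L_K5, P1))    # True

# Counterexample: path graph 1-2-3 with partition {{1,2},{3}} is not an AEP
# (node 1 has degree 0 toward {3}, node 2 has degree 1 toward {3})
L_path = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
P2 = np.array([[1, 0], [1, 0], [0, 1]], float)
print(is_aep(L_path, P2))  # False
```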

5 H2-error bounds

In this section, we will formulate the first main theorem of this paper. The theorem gives an a priori upper bound for the H2-norm of the approximation error in the case that we cluster according to an AEP. After formulating the theorem, in the remainder of this section we will establish a proof. The proof will use a sequence of separate lemmas, whose proofs can be found in the "Appendix."


Before stating the theorem, we will first discuss some important ingredients. Let S and Ŝ denote the transfer functions of the original network (4) and the reduced order network (5), respectively. We will measure the approximation error by the H2-norm ‖S − Ŝ‖_{H2} of the difference of these transfer functions. An important role will be played by the N − 1 auxiliary input–state–output systems

ẋ = (A − λB)x + Ed,
z = λx, (6)

where λ ranges over the N − 1 nonzero eigenvalues of the Laplacian L. Let S_λ(s) = λ(sI − A + λB)⁻¹E be the transfer matrices of these systems. We assume that the original network (4) is synchronized, so that all of the A − λB are Hurwitz. Let ‖S_λ‖_{H2} denote the H2-norm of S_λ. Recall that the set of leader nodes is V_L = {v1, v2, …, vm}. Node v_i will be called leader i. This leader is an element of cluster C_{k_i} for some k_i ∈ {1, 2, …, k}. We now have the following theorem:

Theorem 2 Assume that the network (4) is synchronized. Let π be an AEP of the graph G. The absolute approximation error when clustering G according to π then satisfies

‖S − Ŝ‖²_{H2} ≤ (S_{max,H2})² Σ_{i=1}^m (1 − 1/|C_{k_i}|),

where C_{k_i} is the set of cellmates of leader i, and

S_{max,H2} := max_{λ∈σ(L)\σ(L̂)} ‖S_λ‖_{H2}.

Furthermore, the relative approximation error satisfies

‖S − Ŝ‖²_{H2} / ‖S‖²_{H2} ≤ (S_{max,H2}/S_{min,H2})² · [ Σ_{i=1}^m (1 − 1/|C_{k_i}|) ] / [ m(1 − 1/N) ],

where

S_{min,H2} := min_{λ∈σ(L)\{0}} ‖S_λ‖_{H2}.

Remark 1 We see that, with a fixed number of agents and a fixed number of leaders, the approximation error is equal to 0 if in each cluster that contains a leader, the leader is the only node in that cluster. In general, the upper bound increases if the numbers of cellmates of the leaders increase. The upper bound also depends multiplicatively on the maximal H2-norm of the auxiliary systems (6) over all Laplacian eigenvalues in the complement of the spectrum of the reduced Laplacian L̂. The relative error in addition depends on the minimal H2-norm of the auxiliary systems (6) over all nonzero Laplacian eigenvalues.


Remark 2 For the special case that the agents are single integrators (so n = 1, A = 0, B = 1, and E = 1), it is easily seen that

S_{max,H2} = √( (1/2) max{λ | λ ∈ σ(L)\σ(L̂)} )   and   S_{min,H2} = √( (1/2) min{λ | λ ∈ σ(L), λ ≠ 0} ).

Thus, in the single integrator case the corresponding a priori upper bounds explicitly involve the Laplacian eigenvalues. As already noted in Sect. 1, the single integrator case was also studied in [26] for the slightly different setup in which the output equation in the original network (4) is taken as y = (W^{1/2}R^T ⊗ I_n)x instead of y = (L ⊗ I_n)x. Here, R is the incidence matrix of the graph and W the diagonal matrix with the edge weights on the diagonal (in other words, L = RWR^T). It was shown in [26] that in that case the absolute and relative approximation errors even admit the explicit formulas

‖S − Ŝ‖²_{H2} = (1/2) Σ_{i=1}^m (1 − 1/|C_{k_i}|),

and

‖S − Ŝ‖²_{H2} / ‖S‖²_{H2} = [ Σ_{i=1}^m (1 − 1/|C_{k_i}|) ] / [ m(1 − 1/N) ].
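The explicit formulas above are elementary to evaluate. As a hypothetical configuration (the leader placement is an assumption for illustration), take N = 10 agents with the Fig. 1 partition and m = 2 leaders placed in the cells of sizes 4 and 2:

```python
import numpy as np

N = 10
leader_cell_sizes = [4, 2]          # |C_{k_1}|, |C_{k_2}| (hypothetical)
m = len(leader_cell_sizes)

# Explicit formulas from [26] for the single integrator case
abs_err_sq = 0.5 * sum(1 - 1/c for c in leader_cell_sizes)
rel_err_sq = sum(1 - 1/c for c in leader_cell_sizes) / (m * (1 - 1/N))

print(abs_err_sq)   # 0.5 * (0.75 + 0.5) = 0.625
print(rel_err_sq)   # 1.25 / 1.8
```

This makes Remark 1 concrete: shrinking a leader's cell to a singleton removes its contribution entirely.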

In the remainder of this section, we will establish a proof of Theorem 2. Being rather technical, most of the proofs will be deferred to the "Appendix." As a first step, we establish the following lemma (see also [26], where only the single integrator case was treated):

Lemma 4 Let π be an AEP of the graph G. The approximation error when clustering G according to π then satisfies

‖S − Ŝ‖²_{H2} = ‖S‖²_{H2} − ‖Ŝ‖²_{H2}.

Proof See “Appendix D.” 

Recall that, since π is an AEP, we have σ(L̂) ⊂ σ(L). Label the eigenvalues of L as 0, λ2, λ3, …, λ_N in such a way that 0, λ2, λ3, …, λ_k are the eigenvalues of L̂. Also, without loss of generality, we assume that π is regularly formed, i.e., all ones in each of the columns of P(π) are consecutive. One can always relabel the agents in the graph in such a way that this is achieved. For simplicity, we again denote P(π) by P. Consider now the symmetric matrix

L̄ := (P^T P)^{1/2} L̂ (P^T P)^{−1/2} = (P^T P)^{−1/2} P^T L P (P^T P)^{−1/2}. (7)

Note that the eigenvalues of L̄ and L̂ coincide. Let Û be an orthogonal matrix that diagonalizes L̄. We then have

Û^T L̄ Û = diag(0, λ2, …, λ_k) =: Λ̂. (8)


Next, take U1 = P(P^T P)^{−1/2} Û. The columns of U1 form an orthonormal set:

U1^T U1 = Û^T (P^T P)^{−1/2} P^T P (P^T P)^{−1/2} Û = Û^T Û = I.

Furthermore, we have that

U1^T L U1 = Û^T L̄ Û = Λ̂.

Now choose U2 such that U = [U1 U2] is an orthogonal matrix and

Λ := U^T L U = [ Λ̂  0 ; 0  Λ̄ ], (9)

where Λ̄ = diag(λ_{k+1}, …, λ_N). It is easily verified that the first column of U1, and thus the first column of U, is given by (1/√N)1_N, where 1_N is the N-vector of 1's, a fact that we will use in the remainder of this paper.

Using the above, we will now first establish explicit formulas for the H2-norms of S and Ŝ separately. The following lemma gives a formula for the H2-norm of the original transfer function S:

Lemma 5 Let U be as in (9). For i = 2, …, N, let X_i be the observability Gramian of the auxiliary system (A − λ_i B, E, λ_i I) in (6), i.e., the unique solution of the Lyapunov equation (A − λ_i B)^T X_i + X_i (A − λ_i B) + λ_i² I = 0. Then, the H2-norm of S is given by

‖S‖²_{H2} = tr[ (U^T M M^T U ⊗ I) diag(0, E^T X2 E, …, E^T X_N E) ]. (10)

Proof See “Appendix E.” 

We proceed with finding a formula for the H2-norm of the reduced system. This will be dealt with in the following lemma:

Lemma 6 Let Û be as in (8) above. For i = 2, …, k, let X_i be the observability Gramian of the auxiliary system (A − λ_i B, E, λ_i I) in (6), i.e., the unique solution of the Lyapunov equation (A − λ_i B)^T X_i + X_i (A − λ_i B) + λ_i² I = 0. Then, the H2-norm of Ŝ is given by

‖Ŝ‖²_{H2} = tr[ (Û^T (P^T P)^{1/2} M̂ M̂^T (P^T P)^{1/2} Û ⊗ I) diag(0, E^T X2 E, …, E^T X_k E) ]. (11)

Proof See “Appendix F.” 


Proof of Theorem 2 Using Lemma 4 and formulas (10) and (11), we compute

‖S − Ŝ‖²_{H2}
= tr[ (U^T M M^T U ⊗ I) diag(0, E^T X2 E, …, E^T X_N E) ]
  − tr[ (Û^T (P^T P)^{1/2} M̂ M̂^T (P^T P)^{1/2} Û ⊗ I) diag(0, E^T X2 E, …, E^T X_k E) ]
= tr[ ( [ U1^T M M^T U1  U1^T M M^T U2 ; U2^T M M^T U1  U2^T M M^T U2 ] ⊗ I ) diag(0, E^T X2 E, …, E^T X_N E) ]
  − tr[ (U1^T M M^T U1 ⊗ I) diag(0, E^T X2 E, …, E^T X_k E) ]
= tr[ (U2^T M M^T U2 ⊗ I) diag(E^T X_{k+1} E, …, E^T X_N E) ], (12)

where the second equality follows from the fact that

M̂^T (P^T P)^{1/2} Û = M^T P (P^T P)⁻¹ (P^T P)^{1/2} Û = M^T P (P^T P)^{−1/2} Û = M^T U1.

Next, observe that (12) can be rewritten as

‖S − Ŝ‖²_{H2}
= tr[ (U2^T M M^T U2 ⊗ I) diag(E^T X_{k+1} E, …, E^T X_N E) ]
= tr[ U2^T M M^T U2 diag( tr(E^T X_{k+1} E), …, tr(E^T X_N E) ) ]
= tr[ U2^T M M^T U2 diag( ‖S_{λ_{k+1}}‖²_{H2}, …, ‖S_{λ_N}‖²_{H2} ) ],

where S_{λ_j}, for j = k + 1, …, N, is the transfer function of the auxiliary system (6). An upper bound for this expression is given by

tr[ U2^T M M^T U2 diag( ‖S_{λ_{k+1}}‖²_{H2}, …, ‖S_{λ_N}‖²_{H2} ) ] ≤ (S_{max,H2})² tr( U2^T M M^T U2 ),

where S_{max,H2} = max_{k+1≤j≤N} ‖S_{λ_j}‖_{H2}. Furthermore, we have

tr( U2^T M M^T U2 ) = tr( U^T M M^T U ) − tr( U1^T M M^T U1 ) = m − tr( P(P^T P)⁻¹P^T M M^T ).


Since, by assumption, the partition π is regularly formed, P(P^T P)⁻¹P^T is a block diagonal matrix of the form

P(P^T P)⁻¹P^T = diag(P1, P2, …, Pk).

It is easily verified that each P_i is a |C_i| × |C_i| matrix whose elements are all equal to 1/|C_i|. The matrix M M^T is a diagonal matrix whose diagonal entries are either 0 or 1. We then have that the ith column of P(P^T P)⁻¹P^T M M^T is either equal to the ith column of P(P^T P)⁻¹P^T if agent i is a leader, or zero otherwise. It then follows that the diagonal elements of P(P^T P)⁻¹P^T M M^T are either zero, or 1/|C_{k_i}| if i is part of the leader set, where C_{k_i} is the cell containing agent i. Hence, we have

tr( U1^T M M^T U1 ) = Σ_{i=1}^m 1/|C_{k_i}|,

and consequently,

tr( U2^T M M^T U2 ) = m − Σ_{i=1}^m 1/|C_{k_i}|.

In conclusion, we have

‖S − Ŝ‖²_{H2} ≤ (S_{max,H2})² Σ_{i=1}^m (1 − 1/|C_{k_i}|),

which completes the proof of the first part of the theorem.

We now prove the statement about the relative error. For this, we will establish a lower bound for ‖S‖²_{H2}. By (10), we have

‖S‖²_{H2} = tr[ (U^T M M^T U ⊗ I) diag(0, E^T X2 E, …, E^T X_N E) ]
= tr[ U^T M M^T U diag(0, tr(E^T X2 E), …, tr(E^T X_N E)) ]. (13)

The first column of U spans the eigenspace corresponding to the eigenvalue 0 of L and hence must be equal to u1 = (1/√N)1_N. Let Ū be such that U = [u1 Ū]. It is then easily verified using (13) that

‖S‖²_{H2} = tr[ Ū^T M M^T Ū diag( tr(E^T X2 E), …, tr(E^T X_N E) ) ]
= tr[ Ū^T M M^T Ū diag( ‖S_{λ2}‖²_{H2}, …, ‖S_{λ_N}‖²_{H2} ) ].

Finally, since

tr( Ū^T M M^T Ū ) = tr( M^T Ū Ū^T M ) = tr( M^T (U U^T − u1 u1^T) M ) = m − m/N = m(1 − 1/N),

we obtain that ‖S‖²_{H2} ≥ m(1 − 1/N)(S_{min,H2})². This then yields the upper bound for the relative error as claimed. ∎

Remark 3 Note that by our labeling of the eigenvalues of L, in the formulation of Theorem 2 we have that σ(L)\σ(L̂) is equal to the set {λ_{k+1}, …, λ_N} used in the proof. We stress that this should not be confused with the notation often used in the literature, where the λ_i's are labeled in increasing order.

6

H

-error bounds

Whereas in the previous section we studied a priori upper bounds for the approximation error in terms of theH2-norm, the present section aims at expressing the approximation

error in terms of theH-norm. This section consists of two subsections. In the first subsection, we consider the special case that the agent dynamics is a single integrator system. Here, we obtain an explicit formula for the H-norm of the error. In the second subsection, we find an upper bound for theH-error for symmetric systems.

6.1 The single integrator case

Here, we consider the special case that the agent dynamics is a single integrator system. In this case, we have $A = 0$, $B = 1$, and $E = 1$ and the original system (4) reduces to

\[
\dot x = -Lx + Mu, \qquad y = Lx. \tag{14}
\]

The state space dimension of (14) is then simply $N$, the number of agents. For a given partition $\pi = \{C_1, C_2, \ldots, C_k\}$, the reduced system (5) is now given by

\[
\dot{\hat x} = -\hat L\hat x + \hat Mu, \qquad \hat y = LP\hat x,
\]

where $P = P(\pi)$ is again the characteristic matrix of $\pi$ and $\hat x\in\mathbb R^k$. The transfer functions $S$ and $\hat S$, of the original and reduced system, respectively, are given by

\[
S(s) = L(sI_N+L)^{-1}M, \qquad \hat S(s) = LP\left(sI_k+\hat L\right)^{-1}\hat M.
\]

The first main result of this section is the following explicit formula for the $\mathcal{H}_\infty$-model reduction error. It complements the formula for the $\mathcal{H}_2$-error obtained in [26] (see also Remark 2):

Theorem 3 Let $\pi$ be an AEP of the graph $\mathcal G$. If the network with single integrator agent dynamics (14) is clustered according to $\pi$, then

\[
\|S-\hat S\|^2_{\mathcal{H}_\infty} =
\begin{cases}
\displaystyle\max_{1\le i\le m}\left(1-\frac{1}{|C_{k_i}|}\right) & \text{if the leaders are in different cells,}\\[2mm]
1 & \text{otherwise,}
\end{cases}
\]

where, for some $k_i\in\{1,2,\ldots,k\}$, $C_{k_i}$ is the set of cellmates of leader $i$. Furthermore, $\|S\|_{\mathcal{H}_\infty} = 1$, hence the relative and absolute $\mathcal{H}_\infty$-errors coincide.

Remark 4 We see that the $\mathcal{H}_\infty$-error lies in the interval $[0, 1]$. The error is maximal ($=1$) if and only if two or more leader nodes occupy one and the same cell. The error is minimal ($=0$) if and only if each leader node occupies a different cell, and is the only node in this cell. In general, the error increases if the number of cellmates of the leaders increases.
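This closed form can be checked numerically on a small example. The sketch below (numpy) uses the complete graph $K_4$, for which every partition is almost equitable; the partition and leader choice are illustrative, not from the paper. It compares a frequency sweep of $\sigma_1(S(j\omega)-\hat S(j\omega))$ with the predicted value $1 - 1/|C_{k_1}|$ for a single leader in a cell of size 2:

```python
import numpy as np

# Single-integrator network on the complete graph K4 (every partition of a
# complete graph is almost equitable). Partition {{1,2},{3,4}}, one leader.
N = 4
L = N * np.eye(N) - np.ones((N, N))              # Laplacian of K4
P = np.array([[1, 0], [1, 0], [0, 1], [0, 1.]])  # characteristic matrix of pi
M = np.eye(N)[:, [0]]                            # leader: node 1 (index 0)

Lh = np.linalg.solve(P.T @ P, P.T @ L @ P)       # reduced Laplacian (P^T P)^{-1} P^T L P
Mh = np.linalg.solve(P.T @ P, P.T @ M)

def err_gain(w):
    """sigma_1(S(jw) - Shat(jw)) of the error system."""
    S  = L @ np.linalg.solve(1j * w * np.eye(N) + L, M)
    Sh = L @ P @ np.linalg.solve(1j * w * np.eye(2) + Lh, Mh)
    return np.linalg.svd(S - Sh, compute_uv=False)[0]

sweep = max(err_gain(w) for w in np.logspace(-4, 4, 300))
# Theorem 3 predicts ||S - Shat||_Hinf^2 = 1 - 1/|C_{k_1}| = 1 - 1/2 here,
# attained as w -> 0, where Delta(0) = (I - P (P^T P)^{-1} P^T) M.
assert abs(sweep**2 - 0.5) < 1e-3
print(sweep**2)
```

The sweep only approaches the supremum from below (the norm is attained in the limit $\omega\to 0$), which is why the tolerance is loose.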

Proof of Theorem 3 To simplify notation, denote $\Delta(s) = S(s)-\hat S(s)$. Note that both $S$ and $\hat S$ have all poles in the open left half plane. We now first show that, since $\pi$ is an AEP, we have

\[
\|\Delta\|_{\mathcal{H}_\infty} = \sigma_1(\Delta(0)). \tag{15}
\]

First note that $\hat S(s) = LP(P^TP)^{-\frac 12}\left(sI_k+\bar L\right)^{-1}(P^TP)^{\frac 12}\hat M$, where the symmetric matrix $\bar L$ is given by (7). Thus, a state space representation for the error system is given by

\[
\dot x_e = \begin{pmatrix}-L & 0\\ 0 & -\bar L\end{pmatrix}x_e + \begin{pmatrix}M\\ (P^TP)^{\frac 12}\hat M\end{pmatrix}u, \qquad
e = \begin{pmatrix}L & -LP(P^TP)^{-\frac 12}\end{pmatrix}x_e. \tag{16}
\]

Next, we show that (15) holds by applying Lemma 1 to system (16). Indeed, with $X = -L$, we have

\[
\begin{aligned}
\begin{pmatrix}L & -LP(P^TP)^{-\frac 12}\end{pmatrix}
\begin{pmatrix}-L & 0\\ 0 & -\bar L\end{pmatrix}
&= \begin{pmatrix}-L^2 & LP(P^TP)^{-\frac 12}\bar L\end{pmatrix}
= \begin{pmatrix}-L^2 & LP\hat L(P^TP)^{-\frac 12}\end{pmatrix}\\
&= \begin{pmatrix}-L^2 & L^2P(P^TP)^{-\frac 12}\end{pmatrix}
= X\begin{pmatrix}L & -LP(P^TP)^{-\frac 12}\end{pmatrix},
\end{aligned}
\]

and from Lemma 1 it then immediately follows that $\|\Delta\|_{\mathcal{H}_\infty} = \sigma_1(\Delta(0))$. To compute $\sigma_1(\Delta(0))$, we apply Lemma 2 to system (16). First, it is easily verified that

\[
\ker\begin{pmatrix}-L & 0\\ 0 & -\bar L\end{pmatrix} \subset \ker\begin{pmatrix}L & -LP(P^TP)^{-\frac 12}\end{pmatrix}.
\]

By applying Lemma 2 we then obtain

\[
\Delta(0) = \begin{pmatrix}L & -LP(P^TP)^{-\frac 12}\end{pmatrix}
\begin{pmatrix}L & 0\\ 0 & \bar L\end{pmatrix}^{+}
\begin{pmatrix}M\\ (P^TP)^{\frac 12}\hat M\end{pmatrix}
= L\left(L^{+} - P(P^TP)^{-\frac 12}\bar L^{+}(P^TP)^{-\frac 12}P^T\right)M. \tag{17}
\]

Recall that $\hat U$ in (8) is an orthogonal matrix that diagonalizes $\bar L$ and that $U_1 = P(P^TP)^{-\frac 12}\hat U$. Then, $\bar L^{+} = \hat U\hat\Lambda^{+}\hat U^T$. Thus, we have

\[
P(P^TP)^{-\frac 12}\bar L^{+}(P^TP)^{-\frac 12}P^T = U_1\hat\Lambda^{+}U_1^T.
\]

Next, we compute

\[
LL^{+} = U\Lambda U^TU\Lambda^{+}U^T = U\Lambda\Lambda^{+}U^T = I_N - \tfrac 1N\mathbf 1_N\mathbf 1_N^T, \tag{18}
\]

where the last equality follows from the fact that the first column of $U$ is $\frac{1}{\sqrt N}\mathbf 1_N$. Now observe that

\[
LU_1\hat\Lambda^{+}U_1^T = U\Lambda U^TU_1\hat\Lambda^{+}U_1^T = U_1\hat\Lambda\hat\Lambda^{+}U_1^T = U_1U_1^T - \tfrac 1N\mathbf 1_N\mathbf 1_N^T = P(P^TP)^{-1}P^T - \tfrac 1N\mathbf 1_N\mathbf 1_N^T. \tag{19}
\]

Combining (18) and (19) with (17), we obtain

\[
\Delta(0) = \left(I_N - P(P^TP)^{-1}P^T\right)M.
\]

From (15) then, we have that the $\mathcal{H}_\infty$-error is given by

\[
\begin{aligned}
\|S-\hat S\|^2_{\mathcal{H}_\infty} &= \lambda_{\max}\!\left(\Delta(0)^T\Delta(0)\right)
= \lambda_{\max}\!\left(M^T\left(I_N-P(P^TP)^{-1}P^T\right)^2M\right)\\
&= \lambda_{\max}\!\left(I_m - M^TP(P^TP)^{-1}P^TM\right)
= 1 - \lambda_{\min}\!\left(M^TP(P^TP)^{-1}P^TM\right). \tag{20}
\end{aligned}
\]

All that is left is to compute the minimal eigenvalue of $M^TP(P^TP)^{-1}P^TM$. Again, let $\{v_1,v_2,\ldots,v_m\}$ be the set of leaders and note that $M$ satisfies

\[
M = \begin{pmatrix}e_{v_1} & e_{v_2} & \cdots & e_{v_m}\end{pmatrix}.
\]

Again, without loss of generality, assume that $\pi$ is regularly formed. Then, the matrix $P(P^TP)^{-1}P^T$ is block diagonal, with each diagonal block a $|C_i|\times|C_i|$ matrix whose entries are all $\frac{1}{|C_i|}$. Let $k_i\in\{1,2,\ldots,k\}$ be such that $v_i\in C_{k_i}$. If all the leaders are in different cells, then

\[
M^TP(P^TP)^{-1}P^TM = \operatorname{diag}\!\left(\frac{1}{|C_{k_1}|}, \frac{1}{|C_{k_2}|}, \ldots, \frac{1}{|C_{k_m}|}\right),
\]

and so

\[
\lambda_{\min}\!\left(M^TP(P^TP)^{-1}P^TM\right) = \min_{1\le i\le m}\frac{1}{|C_{k_i}|}. \tag{21}
\]

Now suppose that two leaders $v_i$ and $v_j$ are cellmates. Then, we have

\[
M^TP(P^TP)^{-1}P^TM(e_i-e_j) = M^TP(P^TP)^{-1}P^T(e_{v_i}-e_{v_j}) = 0,
\]

which together with $M^TP(P^TP)^{-1}P^TM \ge 0$ implies

\[
\lambda_{\min}\!\left(M^TP(P^TP)^{-1}P^TM\right) = 0. \tag{22}
\]

From (20), (21), and (22), we find the absolute $\mathcal{H}_\infty$-error. To find the relative $\mathcal{H}_\infty$-error, we compute $\|S\|_{\mathcal{H}_\infty}$ by applying Lemmas 1 and 2 to the original system (14). Combined with (18), this results in the $\mathcal{H}_\infty$-norm of the original system:

\[
\|S\|^2_{\mathcal{H}_\infty} = \lambda_{\max}\!\left(S(0)^TS(0)\right) = \lambda_{\max}\!\left(M^T\left(I_N - \tfrac 1N\mathbf 1_N\mathbf 1_N^T\right)M\right) = 1.
\]

This completes the proof. $\square$

6.2 The general case with symmetric agent dynamics

In this subsection, we return to the general case that the agent dynamics is given by an arbitrary multivariable input–state–output system. Thus, the original and reduced networks are again given by (4) and (5), respectively. As in the proof of Theorem 3, we will rely heavily on Lemma 2 to compute the $\mathcal{H}_\infty$-error. Since Lemma 2 relies on a symmetry argument, we will need to assume that the matrices $A$ and $B$ are both symmetric, which will be a standing assumption in the remainder of this section.

We will now establish an a priori upper bound for the $\mathcal{H}_\infty$-norm of the approximation error in the case that we cluster according to an AEP. Again, an important role is played by the $N-1$ auxiliary systems (6) with $\lambda$ ranging over the nonzero eigenvalues of the Laplacian $L$. Again, let $S_\lambda(s) = \lambda(sI - A + \lambda B)^{-1}E$ be their transfer functions. We assume that the original network (4) is synchronized, so that all of the $A-\lambda B$ are Hurwitz. We again use $S$, $\hat S$, and $\Delta$ to denote the relevant transfer functions.

Theorem 4 Assume the network (4) is synchronized and that $A$ and $B$ are symmetric matrices. Let $\pi$ be an AEP of the graph $\mathcal G$. The $\mathcal{H}_\infty$-error when clustering $\mathcal G$ according to $\pi$ then satisfies

\[
\|S-\hat S\|^2_{\mathcal{H}_\infty} \le
\begin{cases}
(S_{\max,\mathcal{H}_\infty})^2\displaystyle\max_{1\le i\le m}\left(1-\frac{1}{|C_{k_i}|}\right) & \text{if the leaders are in different cells,}\\[2mm]
(S_{\max,\mathcal{H}_\infty})^2 & \text{otherwise,}
\end{cases}
\]

and

\[
\frac{\|S-\hat S\|^2_{\mathcal{H}_\infty}}{\|S\|^2_{\mathcal{H}_\infty}} \le
\begin{cases}
\left(\dfrac{S_{\max,\mathcal{H}_\infty}}{S_{\min,\mathcal{H}_\infty}}\right)^2\displaystyle\max_{1\le i\le m}\left(1-\frac{1}{|C_{k_i}|}\right) & \text{if the leaders are in different cells,}\\[3mm]
\left(\dfrac{S_{\max,\mathcal{H}_\infty}}{S_{\min,\mathcal{H}_\infty}}\right)^2 & \text{otherwise,}
\end{cases}
\]

where

\[
S_{\max,\mathcal{H}_\infty} := \max_{\lambda\in\sigma(L)\setminus\sigma(\hat L)}\|S_\lambda\|_{\mathcal{H}_\infty}, \tag{23}
\]

and

\[
S_{\min,\mathcal{H}_\infty} := \min_{\lambda\in\sigma(L)\setminus\{0\}}\sigma_{\min}(S_\lambda(0)), \tag{24}
\]

with $S_\lambda$ the transfer functions of the auxiliary systems (6).

Remark 5 The absolute $\mathcal{H}_\infty$-error thus lies in the interval $[0, S_{\max,\mathcal{H}_\infty}]$, with $S_{\max,\mathcal{H}_\infty}$ the maximum over the $\mathcal{H}_\infty$-norms of the transfer functions $S_\lambda$ with $\lambda\in\sigma(L)\setminus\sigma(\hat L)$. The error is minimal ($=0$) if each leader node occupies a different cell, and is the only node in this cell. In general, the upper bound increases if the number of cellmates of the leaders increases.

Proof of Theorem 4 First note that the transfer function $\hat S$ of the reduced network (5) is equal to

\[
\hat S(s) = \left(LP(P^TP)^{-\frac 12}\otimes I_n\right)\left(sI - I_k\otimes A + \bar L\otimes B\right)^{-1}\left((P^TP)^{\frac 12}\hat M\otimes E\right), \tag{25}
\]

with the symmetric matrix $\bar L$ given by (7). Analogous to the proof of Theorem 3, we first apply Lemma 1 to the error system

\[
\dot x_e = \begin{pmatrix}I_N\otimes A - L\otimes B & 0\\ 0 & I_k\otimes A - \bar L\otimes B\end{pmatrix}x_e + \begin{pmatrix}M\otimes E\\ (P^TP)^{\frac 12}\hat M\otimes E\end{pmatrix}u, \qquad
e = \begin{pmatrix}L\otimes I_n & -LP(P^TP)^{-\frac 12}\otimes I_n\end{pmatrix}x_e,
\]

with transfer function $\Delta$. Take $X = I_N\otimes A - L\otimes B$. We then have

\[
\begin{pmatrix}L\otimes I_n & -LP(P^TP)^{-\frac 12}\otimes I_n\end{pmatrix}
\begin{pmatrix}I_N\otimes A - L\otimes B & 0\\ 0 & I_k\otimes A - \bar L\otimes B\end{pmatrix}
= X\begin{pmatrix}L\otimes I_n & -LP(P^TP)^{-\frac 12}\otimes I_n\end{pmatrix}.
\]

From Lemma 1, we thus obtain that

\[
\|\Delta\|_{\mathcal{H}_\infty} = \sigma_1(\Delta(0)) = \lambda_{\max}\!\left(\Delta(0)^T\Delta(0)\right)^{\frac 12}.
\]

In the proof of Lemma 4, it was shown that

\[
\hat S(-s)^T\Delta(s) = \hat S(-s)^T\left(S(s)-\hat S(s)\right) = 0.
\]

Since all transfer functions involved are stable, in particular this holds for $s = 0$. We then have that $\hat S(0)^T(S(0)-\hat S(0)) = 0$, i.e., $\hat S(0)^TS(0) = \hat S(0)^T\hat S(0)$. By transposing, we also have $S(0)^T\hat S(0) = \hat S(0)^T\hat S(0)$. Therefore,

\[
\begin{aligned}
\Delta(0)^T\Delta(0) &= \left(S(0)-\hat S(0)\right)^T\left(S(0)-\hat S(0)\right)\\
&= S(0)^TS(0) - S(0)^T\hat S(0) - \hat S(0)^TS(0) + \hat S(0)^T\hat S(0)\\
&= S(0)^TS(0) - \hat S(0)^T\hat S(0).
\end{aligned}
\]

By applying Lemma 2 to system (4), we obtain

\[
\begin{aligned}
S(0)^TS(0) &= \left(M^T\otimes E^T\right)(I_N\otimes A - L\otimes B)^{+}\left(L^2\otimes I_n\right)(I_N\otimes A - L\otimes B)^{+}(M\otimes E)\\
&= \left(M^T\otimes E^T\right)(U\otimes I_n)(I_N\otimes A - \Lambda\otimes B)^{+}\left(\Lambda^2\otimes I_n\right)(I_N\otimes A - \Lambda\otimes B)^{+}\left(U^T\otimes I_n\right)(M\otimes E)\\
&= \left(M^TU\otimes E^T\right)\operatorname{diag}\!\left(0,\,\lambda_2^2(A-\lambda_2B)^{-2},\ldots,\lambda_N^2(A-\lambda_NB)^{-2}\right)\left(U^TM\otimes E\right)\\
&= \left(M^TU\otimes I_r\right)\operatorname{diag}\!\left(0,\,S_{\lambda_2}(0)^TS_{\lambda_2}(0),\ldots,S_{\lambda_N}(0)^TS_{\lambda_N}(0)\right)\left(U^TM\otimes I_r\right),
\end{aligned}
\tag{26}
\]

where $S_\lambda$ is again given by (6). Recall that $\hat M = (P^TP)^{-1}P^TM$ and $U_1 = P(P^TP)^{-\frac 12}\hat U$. Similarly, for the reduced network we obtain

\[
\begin{aligned}
\hat S(0)^T\hat S(0) &= \left(M^TP(P^TP)^{-\frac 12}\otimes E^T\right)\left(I_k\otimes A - \bar L\otimes B\right)^{+}\left((P^TP)^{-\frac 12}P^TL^2P(P^TP)^{-\frac 12}\otimes I_n\right)\\
&\qquad\times\left(I_k\otimes A - \bar L\otimes B\right)^{+}\left((P^TP)^{-\frac 12}P^TM\otimes E\right)\\
&= \left(M^TP(P^TP)^{-\frac 12}\hat U\otimes E^T\right)\left(I_k\otimes A - \hat\Lambda\otimes B\right)^{+}\left(\hat\Lambda^2\otimes I_n\right)\left(I_k\otimes A - \hat\Lambda\otimes B\right)^{+}\left(\hat U^T(P^TP)^{-\frac 12}P^TM\otimes E\right)\\
&= \left(M^TU_1\otimes E^T\right)\operatorname{diag}\!\left(0,\,\lambda_2^2(A-\lambda_2B)^{-2},\ldots,\lambda_k^2(A-\lambda_kB)^{-2}\right)\left(U_1^TM\otimes E\right)\\
&= \left(M^TU_1\otimes I_r\right)\operatorname{diag}\!\left(0,\,S_{\lambda_2}(0)^TS_{\lambda_2}(0),\ldots,S_{\lambda_k}(0)^TS_{\lambda_k}(0)\right)\left(U_1^TM\otimes I_r\right).
\end{aligned}
\]

Combining the two expressions above, it immediately follows that

\[
\Delta(0)^T\Delta(0) = S(0)^TS(0) - \hat S(0)^T\hat S(0)
= \left(M^TU_2\otimes I_r\right)\operatorname{diag}\!\left(S_{\lambda_{k+1}}(0)^TS_{\lambda_{k+1}}(0),\ldots,S_{\lambda_N}(0)^TS_{\lambda_N}(0)\right)\left(U_2^TM\otimes I_r\right).
\]

By taking $S_{\max,\mathcal{H}_\infty}$ as defined by (23) it then follows that

\[
\begin{aligned}
\Delta(0)^T\Delta(0) &\le \left(M^TU_2\otimes I_r\right)\operatorname{diag}\!\left((S_{\max,\mathcal{H}_\infty})^2I_r,\ldots,(S_{\max,\mathcal{H}_\infty})^2I_r\right)\left(U_2^TM\otimes I_r\right)\\
&= (S_{\max,\mathcal{H}_\infty})^2\left(M^TU_2U_2^TM\otimes I_r\right)\\
&= (S_{\max,\mathcal{H}_\infty})^2\left(M^T\left(I_N-U_1U_1^T\right)M\otimes I_r\right)\\
&= (S_{\max,\mathcal{H}_\infty})^2\left(\left(I_m - M^TP(P^TP)^{-1}P^TM\right)\otimes I_r\right).
\end{aligned}
\]

Continuing as in the proof of Theorem 3, we find an upper bound for the $\mathcal{H}_\infty$-error:

\[
\|\Delta\|^2_{\mathcal{H}_\infty} \le (S_{\max,\mathcal{H}_\infty})^2\lambda_{\max}\!\left(I_m - M^TP(P^TP)^{-1}P^TM\right).
\]

To compute an upper bound for the relative $\mathcal{H}_\infty$-error, we bound the $\mathcal{H}_\infty$-norm of system (4) from below. Again, let $\bar U$ be such that $U = \begin{pmatrix}u_1 & \bar U\end{pmatrix}$ and let $S_{\min,\mathcal{H}_\infty}$ be as defined by (24). From (26) it now follows that

\[
\begin{aligned}
S(0)^TS(0) &= \left(M^T\bar U\otimes I_r\right)\operatorname{diag}\!\left(S_{\lambda_2}(0)^TS_{\lambda_2}(0),\ldots,S_{\lambda_N}(0)^TS_{\lambda_N}(0)\right)\left(\bar U^TM\otimes I_r\right)\\
&\ge \left(M^T\bar U\otimes I_r\right)\operatorname{diag}\!\left((S_{\min,\mathcal{H}_\infty})^2I_r,\ldots,(S_{\min,\mathcal{H}_\infty})^2I_r\right)\left(\bar U^TM\otimes I_r\right)\\
&= (S_{\min,\mathcal{H}_\infty})^2\left(M^T\bar U\bar U^TM\otimes I_r\right)\\
&= (S_{\min,\mathcal{H}_\infty})^2\left(M^T\left(I_N-\tfrac 1N\mathbf 1_N\mathbf 1_N^T\right)M\otimes I_r\right).
\end{aligned}
\]

Again using Lemma 2, we find a lower bound to the $\mathcal{H}_\infty$-norm of $S$:

\[
\|S\|^2_{\mathcal{H}_\infty} = \lambda_{\max}\!\left(S(0)^TS(0)\right) \ge (S_{\min,\mathcal{H}_\infty})^2,
\]

which concludes the proof of the theorem. $\square$

7 Toward a priori error bounds for general graph partitions

Up to now, we have only dealt with establishing error bounds for network reduction by clustering using almost equitable partitions of the network graph. Of course, we would also like to obtain error bounds for arbitrary, possibly non almost equitable, partitions. In this section, we present some ideas to address this more general problem. We will first study the single integrator case. Subsequently, we will look at the general case.

7.1 The single integrator case

Consider the multi-agent network

\[
\dot x = -Lx + Mu, \qquad y = Lx. \tag{27}
\]

As before, assume that the underlying graph $\mathcal G$ is connected. The network is then synchronized. Let $\pi = \{C_1, C_2, \ldots, C_k\}$ be a graph partition, not necessarily an AEP, and let $P = P(\pi)\in\mathbb R^{N\times k}$ be its characteristic matrix. As before, the reduced order network is taken to be the Petrov–Galerkin projection of (27) and is represented by

\[
\dot{\hat x} = -\hat L\hat x + \hat Mu, \qquad \hat y = LP\hat x. \tag{28}
\]

Again, let $S$ and $\hat S$ be the transfer functions of (27) and (28), respectively. We will address the problem of obtaining a priori upper bounds for $\|S-\hat S\|_{\mathcal{H}_2}$ and $\|S-\hat S\|_{\mathcal{H}_\infty}$. We will pursue the following idea: as a first step we will approximate the original Laplacian matrix $L$ (of the original network graph $\mathcal G$) by a new Laplacian matrix, denoted by $L_{\mathrm{AEP}}$ (corresponding to a "nearby" graph $\mathcal G_{\mathrm{AEP}}$) such that the given partition $\pi$ is an AEP for this new graph $\mathcal G_{\mathrm{AEP}}$. This new graph $\mathcal G_{\mathrm{AEP}}$ defines a new multi-agent system with transfer function $S_{\mathrm{AEP}}(s) = L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}M$. The reduced order network of $S_{\mathrm{AEP}}$ (using the AEP $\pi$) has transfer function $\hat S_{\mathrm{AEP}}(s) = L_{\mathrm{AEP}}P\left(sI_k+\hat L_{\mathrm{AEP}}\right)^{-1}\hat M$. Then, using the triangle inequality, both for $p = 2$ and $p = \infty$, we have

\[
\begin{aligned}
\|S-\hat S\|_{\mathcal H_p} &= \|S - S_{\mathrm{AEP}} + S_{\mathrm{AEP}} - \hat S_{\mathrm{AEP}} + \hat S_{\mathrm{AEP}} - \hat S\|_{\mathcal H_p}\\
&\le \|S-S_{\mathrm{AEP}}\|_{\mathcal H_p} + \|S_{\mathrm{AEP}}-\hat S_{\mathrm{AEP}}\|_{\mathcal H_p} + \|\hat S_{\mathrm{AEP}}-\hat S\|_{\mathcal H_p}.
\end{aligned}
\tag{29}
\]

The idea is to obtain a priori upper bounds for all three terms in (29). We first propose an approximating Laplacian matrix $L_{\mathrm{AEP}}$, and subsequently study the problems of establishing upper bounds for the three terms in (29) separately.

For a given matrix $M$, let $\|M\|_F := \operatorname{tr}\!\left(M^TM\right)^{\frac 12}$ denote its Frobenius norm. In the following, denote $\mathcal P := P(P^TP)^{-1}P^T$. Note that $\mathcal P$ is the orthogonal projector onto $\operatorname{im}P$. As approximation for $L$, we compute the unique solution to the convex optimization problem

\[
\begin{array}{ll}
\underset{L_{\mathrm{AEP}}}{\text{minimize}} & \|L-L_{\mathrm{AEP}}\|_F^2\\
\text{subject to} & (I_N-\mathcal P)L_{\mathrm{AEP}}\mathcal P = 0,\\
& L_{\mathrm{AEP}} = L_{\mathrm{AEP}}^T,\quad L_{\mathrm{AEP}}\ge 0,\quad L_{\mathrm{AEP}}\mathbf 1_N = 0.
\end{array}
\tag{30}
\]

In other words, we want to compute a positive semi-definite matrix $L_{\mathrm{AEP}}$ with row sums equal to zero, and with the property that $\operatorname{im}P$ is invariant under $L_{\mathrm{AEP}}$ (equivalently, the given partition $\pi$ is an AEP for the new graph). We will show that such an $L_{\mathrm{AEP}}$ may correspond to an undirected graph with negative weights. However, it is constrained to be positive semi-definite, so the results of Sects. 4, 5, and 6 in this paper will remain valid.

Theorem 5 The matrix $L_{\mathrm{AEP}} := \mathcal PL\mathcal P + (I_N-\mathcal P)L(I_N-\mathcal P)$ is the unique solution to the convex optimization problem (30). If $L$ corresponds to a connected graph, then, in fact, $\ker L_{\mathrm{AEP}} = \operatorname{im}\mathbf 1_N$.

Proof Clearly, $L_{\mathrm{AEP}}$ is symmetric and positive semi-definite since $L$ is. Also, $(I_N-\mathcal P)L_{\mathrm{AEP}}\mathcal P = 0$, and $L_{\mathrm{AEP}}\mathbf 1_N = 0$ since $\mathcal P\mathbf 1_N = \mathbf 1_N$. We now show that $L_{\mathrm{AEP}}$ uniquely minimizes the distance to $L$. Let $X$

satisfy the constraints and define $\Delta = L_{\mathrm{AEP}} - X$. Then, we have

\[
\|L-X\|_F^2 = \|L-L_{\mathrm{AEP}}\|_F^2 + \|\Delta\|_F^2 + 2\operatorname{tr}\!\left((L-L_{\mathrm{AEP}})\Delta\right).
\]

It can be verified that $L - L_{\mathrm{AEP}} = (I_N-\mathcal P)L\mathcal P + \mathcal PL(I_N-\mathcal P)$. Thus,

\[
\operatorname{tr}\!\left((L-L_{\mathrm{AEP}})\Delta\right) = \operatorname{tr}\!\left((I_N-\mathcal P)L\mathcal P\Delta\right) + \operatorname{tr}\!\left(\mathcal PL(I_N-\mathcal P)\Delta\right).
\]

Now, since both $X$ and $L_{\mathrm{AEP}}$ satisfy the first constraint, we have $(I_N-\mathcal P)\Delta\mathcal P = 0$. Using this we have

\[
\operatorname{tr}\!\left((I_N-\mathcal P)L\mathcal P\Delta\right) = \operatorname{tr}\!\left(\mathcal P\Delta(I_N-\mathcal P)L\right) = \operatorname{tr}\!\left(L(I_N-\mathcal P)\Delta\mathcal P\right) = 0.
\]

Also, $\operatorname{tr}(\mathcal PL(I_N-\mathcal P)\Delta) = \operatorname{tr}(L(I_N-\mathcal P)\Delta\mathcal P) = 0$. Thus, we obtain

\[
\|L-X\|_F^2 = \|L-L_{\mathrm{AEP}}\|_F^2 + \|\Delta\|_F^2,
\]

from which it follows that $\|L-X\|_F$ is minimal if and only if $\Delta = 0$, equivalently, $X = L_{\mathrm{AEP}}$.

To prove the second statement, let $x\in\ker L_{\mathrm{AEP}}$, so $x^TL_{\mathrm{AEP}}x = 0$. Then, both $x^T\mathcal PL\mathcal Px = 0$ and $x^T(I_N-\mathcal P)L(I_N-\mathcal P)x = 0$. This clearly implies $L\mathcal Px = 0$ and $L(I_N-\mathcal P)x = 0$. Since $L$ corresponds to a connected graph, we must have $\mathcal Px\in\operatorname{im}\mathbf 1_N$ and $(I_N-\mathcal P)x\in\operatorname{im}\mathbf 1_N$. We conclude that $x\in\operatorname{im}\mathbf 1_N$, as desired. $\square$

As announced above, $L_{\mathrm{AEP}}$ may have positive off-diagonal elements, corresponding to a graph with some of its edge weights being negative. For example, for

\[
L = \begin{pmatrix}
1 & -1 & 0 & 0 & 0\\
-1 & 2 & -1 & 0 & 0\\
0 & -1 & 2 & -1 & 0\\
0 & 0 & -1 & 2 & -1\\
0 & 0 & 0 & -1 & 1
\end{pmatrix}, \qquad
P = \begin{pmatrix}
1 & 0\\ 1 & 0\\ 1 & 0\\ 0 & 1\\ 0 & 1
\end{pmatrix},
\]

we have

\[
L_{\mathrm{AEP}} = \begin{pmatrix}
\frac{11}{9} & -\frac 79 & -\frac 19 & 0 & -\frac 13\\
-\frac 79 & \frac{20}{9} & -\frac{10}{9} & 0 & -\frac 13\\
-\frac 19 & -\frac{10}{9} & \frac{14}{9} & -\frac 12 & \frac 16\\
0 & 0 & -\frac 12 & \frac 32 & -1\\
-\frac 13 & -\frac 13 & \frac 16 & -1 & \frac 32
\end{pmatrix},
\]

so the edge between nodes 3 and 5 has a negative weight. Figure 2 shows the graphs corresponding to $L$ and $L_{\mathrm{AEP}}$.

Fig. 2 A path graph on 5 vertices and its closest graph such that the partition $\{\{1,2,3\},\{4,5\}\}$ is almost equitable

Although $L_{\mathrm{AEP}}$ is not necessarily a Laplacian matrix with only nonpositive off-diagonal elements, it has all the properties we associate with a Laplacian matrix. Specifically, it can be checked that all results in this paper remain valid, since they only depend on the symmetric positive semi-definiteness of the Laplacian matrix.
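The closed form of Theorem 5 is straightforward to evaluate. The sketch below (numpy) reproduces the path-graph example and checks the Laplacian-like properties of $L_{\mathrm{AEP}}$:

```python
import numpy as np

# L_AEP = Pi L Pi + (I - Pi) L (I - Pi) for the path graph on 5 vertices
# and the (non-almost-equitable) partition {{1,2,3},{4,5}}.
L = np.array([[ 1, -1,  0,  0,  0],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1,  0],
              [ 0,  0, -1,  2, -1],
              [ 0,  0,  0, -1,  1.]])
P = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1.]])

Pi = P @ np.linalg.solve(P.T @ P, P.T)      # orthogonal projector onto im P
L_aep = Pi @ L @ Pi + (np.eye(5) - Pi) @ L @ (np.eye(5) - Pi)

assert np.allclose(L_aep, L_aep.T)                    # symmetric
assert np.allclose(L_aep @ np.ones(5), 0)             # zero row sums
assert np.allclose((np.eye(5) - Pi) @ L_aep @ P, 0)   # im P is L_AEP-invariant (pi is an AEP)
assert np.all(np.linalg.eigvalsh(L_aep) > -1e-9)      # positive semi-definite
print(L_aep[2, 4])   # ≈ 1/6 > 0: the edge {3,5} gets a negative weight
```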

Using the approximating Laplacian $L_{\mathrm{AEP}} = \mathcal PL\mathcal P + (I_N-\mathcal P)L(I_N-\mathcal P)$ as above, we will now deal with establishing upper bounds for the three terms in (29). We start off with the middle term $\|S_{\mathrm{AEP}}-\hat S_{\mathrm{AEP}}\|_{\mathcal H_p}$ in (29).

According to Remark 2, for $p = 2$ this term has an upper bound depending on the maximal $\lambda\in\sigma(L_{\mathrm{AEP}})\setminus\sigma(\hat L_{\mathrm{AEP}})$, and on the number of cellmates of the leaders with respect to the partitioning $\pi$. For $p = \infty$, in Theorem 3 this term was expressed in terms of the maximal number of cellmates with respect to the partitioning $\pi$ (noting that it is equal to 1 in case two or more leaders share the same cell).

Next, we will take a look at the first and third term in (29), i.e., $\|S-S_{\mathrm{AEP}}\|_{\mathcal H_p}$ and $\|\hat S-\hat S_{\mathrm{AEP}}\|_{\mathcal H_p}$. Let us denote $\Delta_L = L - L_{\mathrm{AEP}}$. We find

\[
\begin{aligned}
S(s)-S_{\mathrm{AEP}}(s) &= L(sI_N+L)^{-1}M - L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}M\\
&= L(sI_N+L)^{-1}M - L_{\mathrm{AEP}}\left((sI_N+L)^{-1} + (sI_N+L_{\mathrm{AEP}})^{-1}\Delta_L(sI_N+L)^{-1}\right)M\\
&= L(sI_N+L)^{-1}M - L_{\mathrm{AEP}}(sI_N+L)^{-1}M - L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}\Delta_L(sI_N+L)^{-1}M\\
&= \Delta_L(sI_N+L)^{-1}M - L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}\Delta_L(sI_N+L)^{-1}M\\
&= \left(I_N - L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}\right)\Delta_L(sI_N+L)^{-1}M.
\end{aligned}
\]

Thus, both for $p = 2$ and $p = \infty$, we have

\[
\|S-S_{\mathrm{AEP}}\|_{\mathcal H_p} \le \left\|I_N - L_{\mathrm{AEP}}(sI_N+L_{\mathrm{AEP}})^{-1}\right\|_{\mathcal H_\infty}\left\|\Delta_L(sI_N+L)^{-1}M\right\|_{\mathcal H_p}. \tag{31}
\]

It is also easily seen that

\[
\hat L_{\mathrm{AEP}} = (P^TP)^{-1}P^TL_{\mathrm{AEP}}P = (P^TP)^{-1}P^TLP = \hat L
\]

and $L_{\mathrm{AEP}}P = P(P^TP)^{-1}P^TLP = P\hat L$. Therefore,

\[
\begin{aligned}
\hat S(s)-\hat S_{\mathrm{AEP}}(s) &= LP\left(sI_k+\hat L\right)^{-1}\hat M - L_{\mathrm{AEP}}P\left(sI_k+\hat L_{\mathrm{AEP}}\right)^{-1}\hat M\\
&= LP\left(sI_k+\hat L\right)^{-1}\hat M - P\hat L\left(sI_k+\hat L\right)^{-1}\hat M\\
&= \left(LP - P\hat L\right)\left(sI_k+\hat L\right)^{-1}\hat M.
\end{aligned}
\]

Since, finally, $(LP-P\hat L)^T(LP-P\hat L) = P^T(\Delta_L)^2P$, for $p = 2$ and $p = \infty$, we obtain

\[
\|\hat S-\hat S_{\mathrm{AEP}}\|_{\mathcal H_p} = \left\|\Delta_LP\left(sI_k+\hat L\right)^{-1}\hat M\right\|_{\mathcal H_p}. \tag{32}
\]

Thus, both in (31) and (32) the upper bound involves the difference $\Delta_L = L - L_{\mathrm{AEP}}$ between the original Laplacian and its optimal approximation in the set of Laplacian matrices for which the given partition $\pi$ is an AEP. In a sense, the difference $\Delta_L$ measures how far $\pi$ is from being an AEP for the original graph $\mathcal G$. Obviously, $\Delta_L = 0$ if and only if $\pi$ is an AEP for $\mathcal G$. In that case only the middle term in (29) is present.

7.2 The general case

In this final subsection, we will put forward some ideas to deal with the case that the agent dynamics is a general linear input–state–output system and the given graph partition $\pi$, with characteristic matrix $P$, is not almost equitable. In this case, the original network is given by (4) and the reduced network by (5). Their transfer functions are $S$ and $\hat S$, respectively. Let $L_{\mathrm{AEP}}$ and $\hat L_{\mathrm{AEP}}$ be as in the previous subsection and let

\[
S_{\mathrm{AEP}}(s) = (L_{\mathrm{AEP}}\otimes I_n)(sI - I_N\otimes A + L_{\mathrm{AEP}}\otimes B)^{-1}(M\otimes E)
\]

and

\[
\hat S_{\mathrm{AEP}}(s) = (L_{\mathrm{AEP}}P\otimes I_n)\left(sI - I_k\otimes A + \hat L_{\mathrm{AEP}}\otimes B\right)^{-1}(\hat M\otimes E).
\]

As before, we assume that (4) is synchronized, so $S$ is stable. However, since the partition $\pi$ is no longer assumed to be an AEP, the reduced transfer function $\hat S$ need not be stable anymore. Also, $S_{\mathrm{AEP}}$ and $\hat S_{\mathrm{AEP}}$ need not be stable. We will now first study under what conditions these are stable. First note that $\hat S$ is stable if and only if $A-\hat\lambda B$ is Hurwitz for all nonzero eigenvalues $\hat\lambda$ of $\hat L$. Moreover, $S_{\mathrm{AEP}}$ and $\hat S_{\mathrm{AEP}}$ are stable if and only if $A-\lambda B$ is Hurwitz for all nonzero eigenvalues $\lambda$ of $L_{\mathrm{AEP}}$. In the following, let $\lambda_{\min}(L)$ and $\lambda_{\max}(L)$ denote the smallest nonzero and largest eigenvalue of $L$, respectively. We have the following lemma about the location of the nonzero eigenvalues of $\hat L$ and $L_{\mathrm{AEP}}$:

Lemma 7 All nonzero eigenvalues of $\hat L$ and of $L_{\mathrm{AEP}}$ lie in the closed interval $[\lambda_{\min}(L), \lambda_{\max}(L)]$.

Proof The claim about the eigenvalues of $\hat L$ follows from the interlacing property (see, e.g., [13]). Next, note that $\mathcal P = Q_1Q_1^T$, with $Q_1 = P(P^TP)^{-\frac 12}$. Since the columns of $Q_1$ are orthonormal, there exists a matrix $Q_2\in\mathbb R^{N\times(N-k)}$ such that $\begin{pmatrix}Q_1 & Q_2\end{pmatrix}$ is an orthogonal matrix. Then, we have $I_N-\mathcal P = Q_2Q_2^T$ and we find

\[
L_{\mathrm{AEP}} = \mathcal PL\mathcal P + (I_N-\mathcal P)L(I_N-\mathcal P) = Q_1Q_1^TLQ_1Q_1^T + Q_2Q_2^TLQ_2Q_2^T
= \begin{pmatrix}Q_1 & Q_2\end{pmatrix}
\begin{pmatrix}Q_1^TLQ_1 & 0\\ 0 & Q_2^TLQ_2\end{pmatrix}
\begin{pmatrix}Q_1^T\\ Q_2^T\end{pmatrix}.
\]

It follows that $\sigma(L_{\mathrm{AEP}}) = \sigma(Q_1^TLQ_1)\cup\sigma(Q_2^TLQ_2)$. By the interlacing property, both the eigenvalues of $Q_1^TLQ_1$ and $Q_2^TLQ_2$ are interlaced with the eigenvalues of $L$, so in particular we have that all eigenvalues $\lambda$ of $L_{\mathrm{AEP}}$ satisfy $\lambda\le\lambda_{\max}(L)$. In order to prove the lower bound, note that $Q_1^TLQ_1$ is similar to $\hat L$, for which we know that its nonzero eigenvalues are between the nonzero eigenvalues of $L$. As for the eigenvalues of $Q_2^TLQ_2$, note that $\mathbf 1^TQ_2 = 0$ and $\|Q_2x\|_2 = \|x\|_2$ for all $x$. Thus, we find

\[
\min_{\|x\|_2=1} x^TQ_2^TLQ_2x \;\ge\; \min_{\substack{\mathbf 1^Ty=0\\ \|y\|_2=1}} y^TLy.
\]

Therefore, the smallest eigenvalue of $Q_2^TLQ_2$ is at least the smallest positive eigenvalue of $L$. We conclude that indeed $\lambda\ge\lambda_{\min}(L)$ for all nonzero eigenvalues $\lambda$ of $L_{\mathrm{AEP}}$. $\square$
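Lemma 7 can be spot-checked numerically. Below is a sketch (numpy) on a random connected weighted graph with an arbitrary 3-cell partition; both the graph and the partition are illustrative choices of ours:

```python
import numpy as np

# Check: nonzero eigenvalues of Lhat and L_AEP lie in [lambda_min(L), lambda_max(L)].
rng = np.random.default_rng(0)
N = 8
W = rng.random((N, N)); W = np.triu(W, 1); W = W + W.T   # dense positive weights => connected
L = np.diag(W.sum(axis=1)) - W

P = np.zeros((N, 3))
for j, cell in enumerate([[0, 1, 2], [3, 4], [5, 6, 7]]):
    P[cell, j] = 1.0
Pi = P @ np.linalg.solve(P.T @ P, P.T)

Lhat = np.linalg.solve(P.T @ P, P.T @ L @ P)             # reduced Laplacian
L_aep = Pi @ L @ Pi + (np.eye(N) - Pi) @ L @ (np.eye(N) - Pi)

ev = np.linalg.eigvalsh(L)                               # ascending
lam_min, lam_max = ev[1], ev[-1]                         # smallest nonzero / largest
for eigs in (np.linalg.eigvals(Lhat).real, np.linalg.eigvalsh(L_aep)):
    nz = eigs[np.abs(eigs) > 1e-9]                       # discard the single zero eigenvalue
    assert np.all(nz >= lam_min - 1e-8) and np.all(nz <= lam_max + 1e-8)
print("nonzero spectra within [%.3f, %.3f]" % (lam_min, lam_max))
```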

Using this lemma, we see that a sufficient condition for $\hat S$, $S_{\mathrm{AEP}}$, and $\hat S_{\mathrm{AEP}}$ to be stable is that for each $\lambda\in[\lambda_{\min}(L), \lambda_{\max}(L)]$, the strict Lyapunov inequality

\[
(A-\lambda B)X + X(A-\lambda B)^T < 0
\]

has a positive definite solution $X$. This sufficient condition can be checked by verifying solvability of a single linear matrix inequality, whose size does not depend on the number of agents, see [31]. After having checked this, it would then remain to establish upper bounds for the first and third term in (29). This can be done in an analogous way as in the previous subsection. Specifically, it can be shown that for $p = 2$ and $p = \infty$ we have

\[
\|S-S_{\mathrm{AEP}}\|_{\mathcal H_p} \le \left(1 + \left\|(L_{\mathrm{AEP}}\otimes I_n)(sI - I_N\otimes A + L_{\mathrm{AEP}}\otimes B)^{-1}(I_N\otimes B)\right\|_{\mathcal H_\infty}\right)\left\|(\Delta_L\otimes I_n)(sI - I_N\otimes A + L\otimes B)^{-1}(M\otimes E)\right\|_{\mathcal H_p}
\]

and

\[
\|\hat S-\hat S_{\mathrm{AEP}}\|_{\mathcal H_p} = \left\|(\Delta_LP\otimes I_n)\left(sI - I_k\otimes A + \hat L\otimes B\right)^{-1}(\hat M\otimes E)\right\|_{\mathcal H_p}.
\]

Fig. 3 Ratios of $\mathcal{H}_2$ (left) and $\mathcal{H}_\infty$ (right) upper bounds and corresponding true errors, for a fixed almost equitable partition and all possible sets of leaders. In both figures, the sets of leaders are sorted such that the ratio is increasing (in particular, the ordering of the sets of leaders is not the same)

8 Numerical examples

To illustrate the error bounds we have established in this paper, consider the graph with 10 nodes taken from [26], as shown in Fig. 1. Its Laplacian matrix is

\[
L = \begin{pmatrix}
5 & 0 & 0 & 0 & 0 & -5 & 0 & 0 & 0 & 0\\
0 & 5 & 0 & 0 & -3 & -2 & 0 & 0 & 0 & 0\\
0 & 0 & 6 & -1 & -2 & -3 & 0 & 0 & 0 & 0\\
0 & 0 & -1 & 6 & -5 & 0 & 0 & 0 & 0 & 0\\
0 & -3 & -2 & -5 & 25 & -2 & -6 & -7 & 0 & 0\\
-5 & -2 & -3 & 0 & -2 & 25 & -6 & -7 & 0 & 0\\
0 & 0 & 0 & 0 & -6 & -6 & 15 & -1 & -1 & -1\\
0 & 0 & 0 & 0 & -7 & -7 & -1 & 15 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1
\end{pmatrix},
\]

with spectrum (rounded to three significant digits)

\[
\sigma(L) \approx \{0,\ 1,\ 1.08,\ 4.14,\ 5,\ 6.7,\ 8.36,\ 16.1,\ 28.2,\ 33.5\}.
\]

First, we illustrate the $\mathcal{H}_2$ and $\mathcal{H}_\infty$ error bounds from Theorems 2 and 4. We take

\[
\pi = \{\{1,2,3,4\},\{5,6\},\{7\},\{8\},\{9,10\}\}
\]

and

\[
A = \begin{pmatrix}0.5 & 0\\ 0 & 0.5\end{pmatrix}, \qquad B = E = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}.
\]

Note that, indeed, $\pi$ is an AEP. Also, in order to satisfy the assumptions of Theorem 4, we have taken $A$ and $B$ symmetric. Note that $A-\lambda B$ is Hurwitz for all nonzero eigenvalues $\lambda$ of the Laplacian matrix $L$. Therefore, the multi-agent system is synchronized. It remains to choose the set of leaders $\mathcal V_L$. For demonstration, we compute the $\mathcal{H}_2$ and $\mathcal{H}_\infty$ upper bounds and the true errors for all possible choices of $\mathcal V_L$. Since the sets of leaders are nonempty subsets of $\mathcal V$, it follows that there are $2^{10}-1 = 1023$ possible sets of leaders. Figure 3 shows all the ratios of upper bounds and corresponding true errors, where we define $\frac 00 := 1$. We see that in this example, all true errors and upper bounds are within one order of magnitude, and that in most cases the ratio is below 2. Next, we compare the true errors with the triangle inequality-based error bounds from (29) for a fixed set of leaders and all possible partitions consisting of five cells.
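The claim that $\pi$ is an AEP, and the synchronization condition, can both be verified directly: $\pi$ is almost equitable exactly when $\operatorname{im}P(\pi)$ is $L$-invariant, i.e. $(I-\mathcal P)LP = 0$ with $\mathcal P$ the projector onto $\operatorname{im}P$. A sketch (numpy) using the Laplacian above:

```python
import numpy as np

# The 10-node Laplacian from the example above.
L = np.array([
    [ 5,  0,  0,  0,  0, -5,  0,  0,  0,  0],
    [ 0,  5,  0,  0, -3, -2,  0,  0,  0,  0],
    [ 0,  0,  6, -1, -2, -3,  0,  0,  0,  0],
    [ 0,  0, -1,  6, -5,  0,  0,  0,  0,  0],
    [ 0, -3, -2, -5, 25, -2, -6, -7,  0,  0],
    [-5, -2, -3,  0, -2, 25, -6, -7,  0,  0],
    [ 0,  0,  0,  0, -6, -6, 15, -1, -1, -1],
    [ 0,  0,  0,  0, -7, -7, -1, 15,  0,  0],
    [ 0,  0,  0,  0,  0,  0, -1,  0,  1,  0],
    [ 0,  0,  0,  0,  0,  0, -1,  0,  0,  1.]])
cells = [[0, 1, 2, 3], [4, 5], [6], [7], [8, 9]]    # pi with 0-based indices
P = np.zeros((10, 5))
for j, cell in enumerate(cells):
    P[cell, j] = 1.0
Pi = P @ np.linalg.solve(P.T @ P, P.T)

assert np.allclose((np.eye(10) - Pi) @ L @ P, 0)    # pi is an AEP
ev = np.linalg.eigvalsh(L)                          # ascending
# A - lambda*B = (0.5 - lambda)*I is Hurwitz for every nonzero eigenvalue,
# since the smallest nonzero eigenvalue exceeds 0.5, so the network synchronizes.
assert abs(ev[0]) < 1e-9 and ev[1] > 0.5
print(np.round(ev, 2))
```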
