Distributed Control, Optimization, Coordination of Smart Microgrids
Amirreza Silani
DOI: 10.33612/diss.156215621
Document version: Publisher's PDF (Version of Record). Publication date: 2021.
Citation (APA): Silani, A. (2021). Distributed Control, Optimization, Coordination of Smart Microgrids: Passivity, Output Regulation, Time-Varying and Stochastic Loads. University of Groningen. https://doi.org/10.33612/diss.156215621
Chapter 2
Preliminaries and Modeling
In this chapter, we briefly recall, for the reader's convenience, the methodologies and power network models used in the next chapters. First, we recall the concepts of stochastic calculus, the output regulation methodology, Model Predictive Control (MPC), and the Predictor Corrector Proximal Multiplier (PCPM) algorithm. Then, we introduce the DC and AC network models.
The chapter is organized as follows. Stochastic calculus is presented in Section 2.1. In Section 2.2, the output regulation methodology is recalled. In Section 2.3, the MPC method is presented. In Section 2.4, the PCPM algorithm is presented. The DC and AC network models are presented and discussed in Sections 2.5 and 2.6, respectively.
2.1 Stochastic Calculus
In this section, we recall for the readers’ convenience the definitions of stochastic differential equation, Ito derivative, (asymptotic) stochastic stability, and stochastic passivity through the Ito calculus framework [112–116].
Definition 2.1. (Stochastic differential equation). We define a Stochastic Differential Equation (SDE) as

dx(t) = f(x, u)dt + g(x)dw(t), (2.1)

where f : R^N × R^P → R^N and g : R^N → R^{N×M} are locally Lipschitz, x : R_{≥0} → R^N is the state vector, u : R_{≥0} → R^P is the input of the system and w : R_{≥0} → R^M is the standard Brownian motion vector.
Now, according to [116, Theorem 3.1], the conditions for the existence and uniqueness of solutions to SDEs are presented in the following theorem.
Theorem 2.1. (Existence and uniqueness of the solutions to SDEs). Consider the following SDE:

dx(t) = f(x, t)dt + g(x, t)dw(t), (2.2)

where f : R^N × R_{≥0} → R^N and g : R^N × R_{≥0} → R^{N×M}, x : R_{≥0} → R^N, and w : R_{≥0} → R^M is the standard Brownian motion vector. Assume that there exist two positive constants K and K̄ such that

(i) Lipschitz condition: ∀x, y ∈ R^N and t ∈ [t_0, T],
‖f(x, t) − f(y, t)‖² ≤ K‖x − y‖²,
‖g(x, t) − g(y, t)‖² ≤ K‖x − y‖². (2.3)

(ii) Linear growth condition: ∀x ∈ R^N and t ∈ [t_0, T],
‖f(x, t)‖² ≤ K̄(1 + ‖x‖²),
‖g(x, t)‖² ≤ K̄(1 + ‖x‖²). (2.4)

Then, there exists a unique solution x(t) to the SDE (2.2).

Proof. See [116, Theorem 3.1].
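Although Theorem 2.1 guarantees existence and uniqueness, solutions to SDEs are rarely available in closed form and are typically approximated numerically. As a sketch (a toy example of ours, not taken from the thesis), the Euler–Maruyama scheme can be applied to the Ornstein–Uhlenbeck SDE dx = −ax dt + σ dw, whose drift and diffusion satisfy conditions (i) and (ii):

```python
import numpy as np

# Euler-Maruyama approximation of dx = -a*x dt + sigma dw (an
# Ornstein-Uhlenbeck process); f and g are globally Lipschitz and of
# linear growth, so Theorem 2.1 applies. All parameters are illustrative.
a, sigma, x0, T, dt = 1.0, 0.5, 2.0, 2.0, 0.01
steps, paths = int(T / dt), 5000
rng = np.random.default_rng(0)

x = np.full(paths, x0)
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), paths)   # Brownian increments ~ N(0, dt)
    x = x + (-a * x) * dt + sigma * dw

emp_mean = x.mean()   # theory: E[x(T)] = x0*exp(-a*T) ~ 0.27
print(round(emp_mean, 2))
```

The empirical mean of the simulated paths matches the analytical mean of the process up to discretization and sampling error.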
In the following, we define the Ito derivative [112–116].
Definition 2.2. (Ito derivative). Consider a storage function S(x) which is twice continuously differentiable. Then, LS(x) denotes the Ito derivative of S(x) along the SDE (2.1), i.e.,

LS(x) = (∂S(x)/∂x) f(x, u) + (1/2) tr{ g^⊤(x) (∂²S(x)/∂x^⊤∂x) g(x) }. (2.5)
Now, we recall the definition of (asymptotic) stochastic stability [112–115].
Definition 2.3. ((Asymptotic) stochastic stability). Assume that the deterministic and stochastic terms of the SDE (2.1) at the equilibrium point are identically zero, i.e., f(x̄, u*) = g(x̄) = 0. Then, system (2.1) is (asymptotically) stochastically stable if there exists a twice continuously differentiable positive definite Lyapunov function S : R^N → R_{>0} such that LS(x) ≤ 0 (respectively, LS(x) < 0 for all x ≠ x̄).

Now, we introduce the concept of stochastic passivity [113–115].
Definition 2.4. (Stochastic passivity). Consider system (2.1) with output y = h(x). Assume that the deterministic and stochastic terms of the SDE (2.1) at the equilibrium point are identically zero, i.e., f(x̄, u*) = g(x̄) = 0. Then, system (2.1) is said to be stochastically passive with respect to the supply rate u^⊤y if there exists a twice continuously differentiable positive semi-definite storage function S(x) satisfying

LS(x) ≤ u^⊤y, ∀(x, u) ∈ R^N × R^P. (2.6)

The concepts of (asymptotic) stochastic stability and stochastic passivity of a stochastic dynamical system, defined in Definitions 2.3 and 2.4, respectively, are similar to the (asymptotic) stability and passivity of a deterministic dynamical system (see for instance [117, Chapters 4 and 6]). Indeed, a positive definite storage (Lyapunov) function is considered for the system, and the derivative of this function is then investigated for the stability and passivity analysis. However, the main difference between stochastic and deterministic dynamical systems in this analysis lies in the derivative of the storage (Lyapunov) function: for stochastic systems, we need to use the Ito derivative defined in Definition 2.2. More precisely, the Ito derivative of a storage function along the solution to an SDE is obtained from the Taylor series expansion and the properties of Brownian motion (see for instance [112, Chapter 3] for further details).
Also, note that in order to use Lyapunov stochastic stability analysis and to show the stochastic passivity property of a system described by an SDE, we need to assume that the stochastic and deterministic terms of the SDE at the equilibrium point are identically zero, i.e., f(x̄, u*) = g(x̄) = 0. Otherwise, the stochastic stability of the system cannot be guaranteed and the passivity property of the system cannot be shown. Thus, we need this assumption on the SDE describing the system in order to use the Ito calculus framework for its stochastic stability and passivity analysis.
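To make these notions concrete, consider the scalar SDE dx = −ax dt + σx dw (a toy example of ours, not from the thesis), which satisfies f(x̄, u*) = g(x̄) = 0 at x̄ = 0. For the storage function S(x) = x²/2, formula (2.5) gives LS(x) = (σ²/2 − a)x², so the origin is asymptotically stochastically stable whenever a > σ²/2. The snippet below evaluates this sign condition numerically:

```python
import numpy as np

def LS(x, a, sigma):
    # Ito derivative (2.5) for f(x) = -a*x, g(x) = sigma*x, S(x) = x**2/2:
    # (dS/dx)*f + 0.5*g^2*(d2S/dx2) = -a*x**2 + 0.5*sigma**2*x**2
    return x * (-a * x) + 0.5 * (sigma * x) ** 2

xs = np.linspace(-5.0, 5.0, 201)
stable   = np.all(LS(xs, a=1.0, sigma=0.5)[xs != 0] < 0)   # a > sigma^2/2
unstable = np.all(LS(xs, a=0.1, sigma=1.0)[xs != 0] > 0)   # a < sigma^2/2
print(stable, unstable)   # True True
```

The first parameter pair satisfies the stability condition, the second violates it, so the sign of LS flips accordingly.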
2.2 Output Regulation Methodology
Let x(t) ∈ R^n be the system state, d(t) ∈ R^q be the exosystem state, u(t) ∈ R^m be the control input, and e(t) ∈ R^p be the tracking error, respectively. Then, consider the following composite system:

ẋ(t) = f(x(t), d(t)) + g(x(t), d(t))u(t) (2.7a)
ḋ(t) = S(d(t)) (2.7b)
e(t) = h(x(t), d(t)), (2.7c)

where f : R^n × R^q → R^n, g : R^n × R^q → R^{n×m}, S : R^q → R^q, and h : R^n × R^q → R^p.
Now, following [77, Section 3.2], we define the nonlinear output regulation problem as follows:
Problem 2.1. (Nonlinear output regulation). Let the initial condition (x(0), d(0)) of system (2.7) be sufficiently close to its equilibrium point (x̄, d̄). Then, design a state feedback controller

u(t) = k(x(t), d(t)), (2.8)

such that the closed-loop system (2.7), (2.8) has the following two properties:

Property 2.1. The trajectories col(x(t), d(t)) of the closed-loop system exist and are bounded for all t ≥ 0;

Property 2.2. The trajectories col(x(t), d(t)) of the closed-loop system satisfy lim_{t→∞} e(t) = 0_p.
If there exists a controller such that the closed-loop system satisfies Properties 2.1 and 2.2, we say that the (local) nonlinear output regulation problem (Problem 2.1) is solvable. Now, in analogy with [77, Assumption 3.10], we introduce the following assumption.
Assumption 2.1. (Stability of exosystem). The equilibrium d̄ of the exosystem (2.7b) is Lyapunov stable and there exists an open neighborhood D of d = d̄ in which every point is Poisson stable [77, Remark 3.2].
We need the above assumption to establish the necessary condition for the solvability of Problem 2.1 [77, Chapter 3]. Note that although Poisson stability implies Lyapunov stability, for the sake of clarity and similarity to [77, Assumption 3.10], we also mention Lyapunov stability in Assumption 2.1. Now, according to [77, Theorem 3.8], the solvability of Problem 2.1 is established in the following theorem.
Theorem 2.2. (Solvability and regulator equation). Let Assumption 2.1 hold and system (2.7a) be stabilizable. Problem 2.1 is solvable if and only if there exist smooth functions x(d) and u(d) defined for d ∈ D such that

(∂x(d)/∂d) S(d) = f(x(d), d) + g(x(d), d)u(d) (2.9a)
0_p = h(x(d), d). (2.9b)
The Partial Differential Equation (PDE) (2.9a) together with (2.9b) is called the regulator equation. It can be inferred from Theorem 2.2 that the solvability of the regulator equation (2.9) is equivalent to the solvability of Problem 2.1.
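For linear dynamics ẋ = Ax + Bu + Pd, ḋ = Sd, e = Cx + Qd, the regulator equation (2.9) specializes to the linear (Sylvester-type) equations ΠS = AΠ + BΓ + P and 0 = CΠ + Q in the unknowns Π, Γ, so that x(d) = Πd and u(d) = Γd. The sketch below, with example matrices of our own choosing, solves these equations by vectorization:

```python
import numpy as np

# Linear specialization of the regulator equation (2.9):
#   Pi S = A Pi + B Gam + P,   0 = C Pi + Q.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
P = np.array([[0.0, 0.0], [1.0, 0.0]])
S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic exosystem
C = np.array([[1.0, 0.0]])
Q = np.array([[-1.0, 0.0]])               # e = x1 - d1
n, m, q, p = 2, 1, 2, 1

# Column-major vectorization: vec(Pi S) = (S^T kron I_n) vec(Pi), etc.
In, Iq = np.eye(n), np.eye(q)
M = np.block([
    [np.kron(S.T, In) - np.kron(Iq, A), -np.kron(Iq, B)],
    [np.kron(Iq, C),                     np.zeros((p * q, m * q))],
])
rhs = np.concatenate([P.flatten(order="F"), -Q.flatten(order="F")])
sol = np.linalg.solve(M, rhs)
Pi  = sol[:n * q].reshape((n, q), order="F")
Gam = sol[n * q:].reshape((m, q), order="F")

res1 = np.linalg.norm(Pi @ S - A @ Pi - B @ Gam - P)
res2 = np.linalg.norm(C @ Pi + Q)
print(res1 < 1e-10, res2 < 1e-10)   # True True
```

The residuals verify that the computed (Π, Γ) solve both regulator equations; the example is solvable because this system has no transmission zeros at the exosystem eigenvalues ±i.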
Now, following [77, Section 4.1], we define the kth-order nonlinear output regulation problem as follows:
Problem 2.2. (kth-order nonlinear output regulation). Let the initial condition (x(0), d(0)) of system (2.7) be sufficiently close to its equilibrium point (x̄, d̄). Then, design a state feedback controller (2.8) such that the closed-loop system (2.7), (2.8) has Property 2.1 and

Property 2.3. The trajectories col(x(t), d(t)) of the closed-loop system satisfy lim_{t→∞} (e(t) − o_k(d(t))) = 0_p (see Section 1.6 or [77, Definition 4.1] for the definition of o_k(d(t))).
If there exists a controller such that the closed-loop system satisfies Properties 2.1 and 2.3, we say that the (local) kth-order nonlinear output regulation problem (Problem 2.2) is solvable. Indeed, solvability of Problem 2.2 implies that the steady-state tracking error of the closed-loop system is zero up to the kth order.
Now, let w ∈ R^{n_w} be the uncertainty vector. Then, system (2.7) with uncertainty vector w can be rewritten as:

ẋ(t) = f(x(t), d(t), w) + g(x(t), d(t), w)u(t) (2.10a)
ḋ(t) = S(d(t)) (2.10b)
e(t) = h(x(t), d(t), w), (2.10c)

where f : R^n × R^q × R^{n_w} → R^n, g : R^n × R^q × R^{n_w} → R^{n×m}, and h : R^n × R^q × R^{n_w} → R^p.
Now, following [77, Section 5.1], we define the robust output regulation problem as follows:
Problem 2.3. (Robust output regulation). Design a state feedback controller

u(t) = k(x(t), z(t), d(t))
ż(t) = Q(z(t), e(t)), (2.11)

such that the closed-loop system (2.10), (2.11), for all initial conditions (x(0), z(0), d(0)) sufficiently close to the equilibrium point (x̄, z̄, d̄) and sufficiently small elements of the uncertainty vector w, has the following two properties:

Property 2.4. The trajectories col(x(t), z(t), d(t)) of the closed-loop system exist and are bounded for all t ≥ 0;

Property 2.5. The trajectories col(x(t), z(t), d(t)) of the closed-loop system satisfy lim_{t→∞} e(t) = 0_p.
If there exists a controller such that the closed-loop system satisfies Properties 2.4 and 2.5, we say that the (local) robust nonlinear output regulation problem (Problem 2.3) is solvable.
Moreover, according to [77, Section 7.1], we define the global robust output regulation problem as follows:
Problem 2.4. (Global robust output regulation). Design a state feedback controller

u(t) = k(x(t), z(t), e(t))
ż(t) = Q(x(t), z(t), e(t)), (2.12)

such that the closed-loop system (2.10), (2.12), for any compact set D ⊂ R^q with a known bound and any compact set W ⊂ R^{n_w} with a known bound, has the following two properties:

Property 2.6. For all d(0) ∈ D and w ∈ W, the trajectories col(x(t), z(t), d(t)) of the closed-loop system starting from any initial state x(0) exist and are bounded for all t ≥ 0;

Property 2.7. The trajectories col(x(t), z(t), d(t)) of the closed-loop system satisfy lim_{t→∞} e(t) = 0_p.
If there exists a controller such that the closed-loop system satisfies Properties 2.6 and 2.7, we say that the global robust nonlinear output regulation problem (Problem 2.4) is solvable.
2.3 Model Predictive Control
MPC is a control methodology that is an effective means of dealing with large multivariable constrained control problems. The MPC approach consists of a prediction model, an objective function and a control law. In MPC, an optimal control sequence u* is calculated by optimizing an objective function over a time horizon while simulating the model of the system [118]. The result of the first time step of the optimal control sequence is applied to the system and the rest is discarded. Then, the state of the system is updated and the time horizon is shifted one step. This new state is used to calculate a new optimal control sequence and the process is repeated. This receding-horizon approach results in low computational times and accurate results.
A general formulation of finite-horizon MPC for discrete-time linear systems is as follows [119]:

min_X Σ_{k∈K} J(x, u) (2.13a)
s.t. x(k_0) = x_0 (2.13b)
x(k + 1) = Ax(k) + Bu(k) (2.13c)
x(k + 1) ∈ Z, u(k) ∈ U (2.13d)
∀k ∈ K, (2.13e)

where the feasible sets Z and U are assumed to be convex. The decision variables in X should be chosen over a horizon of time steps K such that the sum of the cost function J(x, u) is minimized. The system state is bounded by the initial state (2.13b), is subject to the dynamics (2.13c), and the input and state of the system belong to the feasibility sets U and Z, respectively. Typically, a quadratic cost function is considered as
J(x, u) = x(k)^⊤ Q x(k) + u(k)^⊤ R u(k), (2.14)

where Q and R are cost matrices of appropriate size. The cost function (2.14) is clearly minimal if the input and state are equal to zero. In order to achieve a desired state x̄ and desired input ū, we should modify the cost function (2.14) as

J(x, u) = (x(k) − x̄)^⊤ Q (x(k) − x̄) + (u(k) − ū)^⊤ R (u(k) − ū). (2.15)

Furthermore, the linear equivalent of (2.14) can be given by

J(x, u) = q^⊤ x(k) + r^⊤ u(k), (2.16)

where q and r are cost vectors of appropriate size. The linear cost function (2.16) in problem (2.13) minimizes the corresponding elements, while the quadratic cost function (2.15) is minimal if the elements are equal to their corresponding desired values.
MPC effectively steers the states of the system towards the optimal value via predictions of future states. However, MPC requires an accurate model of the system, which can be a significant drawback since such a model can be costly to generate [118]. If the mathematical model of the system is accurate, however, MPC has many advantages thanks to its open methodology and its ability to obtain optimal control for the future states of the system. Moreover, MPC can be easily implemented and improved. Indeed, the optimization model can be easily modified when the system or the accuracy of the system's mathematical description changes [120].

In this section, MPC is presented as a finite-horizon optimal control problem in which the result of the first time step of the optimal control sequence is applied to the system and the rest is discarded; then the state of the system is updated, the time horizon is shifted one step and the procedure is repeated. This receding-horizon implementation improves the robustness of the control scheme. Hence, it is useful for power system applications where different uncertainties exist (see Chapter 8 for more details on the power system model). Moreover, a stacked MPC formulation can be used to enhance the computational performance [121]. More precisely, the state trajectory of the system can be calculated directly over the prediction horizon instead of being calculated iteratively.
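A minimal sketch of the stacked formulation (hypothetical double-integrator model and weights, unconstrained so the quadratic program has a closed-form solution): the predicted trajectory is written as X = Φx₀ + ΓU, the cost (2.15) with x̄ = 0, ū = 0 is minimized in one least-squares step, and only the first input of each solution is applied in receding-horizon fashion:

```python
import numpy as np

# Stacked (batch) MPC for x(k+1) = A x(k) + B u(k): X = Phi x0 + Gam U.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator (illustrative)
B = np.array([[0.5], [1.0]])
N, n, m = 20, 2, 1

Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gam = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gam[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

Qbar, Rbar = np.eye(N * n), 0.1 * np.eye(N * m)
# Unconstrained minimizer of X^T Qbar X + U^T Rbar U:  U* = -K (Phi x0)
K = np.linalg.solve(Gam.T @ Qbar @ Gam + Rbar, Gam.T @ Qbar)

x = np.array([[5.0], [0.0]])
for _ in range(30):
    U = -K @ (Phi @ x)
    u0 = U[:m]                 # apply only the first input (receding horizon)
    x = A @ x + B @ u0

print(float(np.linalg.norm(x)))   # state is steered close to zero
```

Building Φ and Γ once replaces the iterative forward simulation of the state trajectory, which is the computational advantage mentioned above; constraints would turn the least-squares step into a quadratic program.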
2.4 Predictor Corrector Proximal Multiplier
In this section, we describe the PCPM algorithm [122]. This algorithm is an iterative decomposition method used to solve convex problems with a separable structure. Consider the following convex problem with separable structure:
min_{u,v} f(u) + h(v)
s.t. Cu + Dv = b. (2.17)

Then, we define w as the Lagrange multiplier and the PCPM algorithm is as follows:
I) Initialize (u^0, v^0, w^0) randomly and set i = 0.

II) Set w̃^i = w^i + α(Cu^i + Dv^i − b), where α is a positive learning rate.

III) Minimize the following objective functions and find the optimal values u* and v*:

J_1(u*) = min_u f(u) + (w̃^i)^⊤ Cu + (1/2α)‖u − u^i‖², (2.18)
J_2(v*) = min_v h(v) + (w̃^i)^⊤ Dv + (1/2α)‖v − v^i‖², (2.19)

then set u^{i+1} = u* and v^{i+1} = v*.

IV) Set w^{i+1} = w^i + α(Cu^{i+1} + Dv^{i+1} − b) and i = i + 1.

V) If convergence is achieved, then go to step VI; otherwise, go to step II.

VI) Stop.
The convergence of the algorithm to its optimal value for a sufficiently small learning rate α is proved in [122, Theorem 3.1]. Also, the rate of convergence is obtained in [122, Theorem 4.1].
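A sketch of steps I)–V) on a toy separable problem of our own (min ½(u − a)² + ½(v − c)² subject to u + v = b, with C = D = 1), where the proximal minimizations (2.18)–(2.19) admit closed-form solutions:

```python
# PCPM iteration on: min 0.5*(u-a)^2 + 0.5*(v-c)^2  s.t.  u + v = b,
# whose exact solution is u* = a + (b-a-c)/2, v* = c + (b-a-c)/2.
a, c, b = 1.0, 3.0, 10.0
alpha = 0.5                 # positive learning rate (small enough to converge)
u, v, w = 0.0, 0.0, 0.0     # step I: initialization

for i in range(5000):
    # step II: predictor multiplier
    w_tilde = w + alpha * (u + v - b)            # here C = D = 1
    # step III: proximal minimizations (2.18)-(2.19); for quadratic f, h
    # the minimizers are available in closed form:
    u_next = (alpha * (a - w_tilde) + u) / (alpha + 1.0)
    v_next = (alpha * (c - w_tilde) + v) / (alpha + 1.0)
    # step IV: corrector multiplier update
    w = w + alpha * (u_next + v_next - b)
    # step V: convergence check
    if max(abs(u_next - u), abs(v_next - v)) < 1e-12:
        u, v = u_next, v_next
        break
    u, v = u_next, v_next

print(round(u, 4), round(v, 4))   # 4.0 6.0
```

The iterates converge to the analytical minimizer while each subproblem only involves one of the two objective terms, which is what makes the method attractive for distributed implementations.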
Figure 2.1: Electrical scheme of DGU i, ZIP load i and transmission line k, with i ∈ V and k ∈ E.
Table 2.1: Description of symbols for the DC network model

G_li — unknown conductance load
I_li — unknown current load
P_li — unknown power load
I_gi — generated current
V_i — load voltage
I_k — line current
u_i — control input
L_k — line inductance
R_k — line resistance
C_gi — shunt capacitor
L_gi — filter inductance
2.5 DC Network Model
In this section, we introduce the model of the considered DC network comprising Distributed Generation Units (DGUs), loads and transmission lines [12–14]. Figure 2.1 illustrates the structure of the considered DC microgrid and Table 2.1 reports the description of the used symbols. Let G = (V, E) be a connected and undirected graph that describes the considered DC network. The nodes and the edges are denoted by V = {1, ..., n} and E = {1, ..., m}, respectively. Then, the topology of the microgrid is represented by the corresponding incidence matrix A ∈ R^{n×m}. Let the ends of each transmission line k be arbitrarily labeled with a '+' or a '−'; then A can be defined as

A_ik = +1 if i is the positive end of k,
       −1 if i is the negative end of k,
        0 otherwise. (2.20)

Before presenting the dynamics of the overall network, we first introduce and discuss the models of each component of the network.
DGU model: Each DGU represents a DC-DC buck converter (including an output RLC low-pass filter) supplying a load. Let I_gi : R_{≥0} → R be the generated current and V_i : R_{≥0} → R be the load voltage at node i ∈ V. Then, the dynamic model of DGU i ∈ V is described by

L_gi İ_gi = −V_i + u_i
C_gi V̇_i = I_gi − I_li(V_i) − Σ_{k∈E_i} I_k, (2.21)

where I_k, u_i : R_{≥0} → R, L_gi, C_gi ∈ R_{>0} and E_i is the set of the lines incident to node i. Moreover, I_li : R → R represents the current demand of load i, possibly depending on the voltage V_i. We should notice that u_i in (2.21) can be expressed as δ_i V_{DCi}, where V_{DCi} is the constant DC voltage provided by an energy source at node i and δ_i is the duty cycle of converter i [14].
Load model: In this thesis, we consider a general nonlinear load model including the parallel combination of the following unknown load components (see the footnote below):

1. impedance component (Z): G_li,
2. current component (I): I_li,
3. power component (P): P_li.
To refer to the load types above, the letters Z, I and P, respectively, are often used in the literature (see for instance [15]). Therefore, in the presence of ZIP loads, I_li(V_i) in (2.21) is given by

I_li(V_i) = G_li V_i + I_li + V_i^{−1} P_li. (2.22)

Line model: The dynamics of the current I_k exchanged between nodes i and j are described by

L_k İ_k = (V_i − V_j) − R_k I_k, (2.23)

where R_k, L_k ∈ R_{>0}.

Footnote 1: Note that for the sake of notation simplicity we prefer to use the load conductance G_li instead of the load resistance.

Table 2.2: Description of symbols for the AC network model

P_ci — conventional power generation
P_di — uncontrolled power injection
φ_i — voltage angle
ω_i — frequency deviation
V_i — voltage
τ_pi — moment of inertia
τ_vi — direct axis transient open-circuit constant
X_di — direct synchronous reactance
X'_di — direct synchronous transient reactance
ψ_i — damping constant
B — susceptance
Ē_fi — constant exciter voltage
N_i — neighboring areas of area i
τ_ci — turbine time constant
ξ_i — speed regulation coefficient
A — incidence matrix of power network
L_com — Laplacian matrix of communication network
u_i — control input
Now, by using the Kirchhoff current (KCL) and voltage (KVL) laws, the dynamics of the overall network can be written compactly as

L_g İ_g = −V + u
C_g V̇ = I_g + AI − [G_l]V − I_l − [V]^{−1} P_l
L İ = −A^⊤ V − RI, (2.24)

where I_g, V, u : R_{≥0} → R^n, I : R_{≥0} → R^m, and G_l, I_l, P_l ∈ R^n. Moreover, L_g, C_g ∈ R^{n×n}_{>0} and R, L ∈ R^{m×m}_{>0} are positive definite diagonal matrices, e.g., L_g = diag(L_g1, ..., L_gn).
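As an illustration (all parameter values hypothetical, forward-Euler integration for simplicity), the compact model (2.24) can be simulated for two DGUs joined by a single line, with the ZIP current (2.22) evaluated at each step:

```python
import numpy as np

# Forward-Euler simulation of (2.24) for two DGUs and one line.
n, m = 2, 1
A  = np.array([[1.0], [-1.0]])          # incidence matrix (2.20)
Lg = np.array([0.1, 0.1]); Cg = np.array([0.5, 0.5])
R  = np.array([0.5]);      L  = np.array([0.05])
Gl = np.array([0.2, 0.2]); Il = np.array([0.3, 0.3]); Pl = np.array([1.0, 1.0])
u  = np.array([10.5, 10.0])             # constant control inputs

# Start near the equilibrium implied by u (the Ig-dynamics force V = u
# at steady state); initial currents are rough estimates.
V, Ig, I = u.copy(), np.array([3.5, 1.4]), np.array([-1.0])

dt = 1e-3
for _ in range(10000):
    Il_zip = Gl * V + Il + Pl / V       # ZIP load current (2.22)
    dIg = (-V + u) / Lg
    dV  = (Ig + A @ I - Il_zip) / Cg
    dI  = (-A.T @ V - R * I) / L
    Ig, V, I = Ig + dt * dIg, V + dt * dV, I + dt * dI

print(np.round(V, 2))   # voltages settle at u
```

The skew-symmetric ±A coupling between the node and line equations conserves energy, while the load conductances and line resistances provide the damping that makes the trajectory settle.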
2.6 AC Network Model
In this section, we discuss the model of the considered power network (see Table 2.2 for the description of the symbols and parameters). The network topology is represented by an undirected and connected graph G = (V, E), where V = {1, 2, ..., n} is the set of control areas and E = {1, 2, ..., m} is the set of transmission lines.
Then, let A ∈ R^{n×m} denote the corresponding incidence matrix and let the ends of transmission line j be arbitrarily labeled with a '+' and a '−'. Then, we have

A_ij = +1 if i is the positive end of j,
       −1 if i is the negative end of j,
        0 otherwise. (2.25)

Moreover, in analogy with [41, 56], we assume that the power network is lossless and that each node represents an aggregated area of generators and loads whose dynamics are described by the so-called flux-decay or single-axis model. Indeed, this model augments the classical second-order swing equations, which describe the dynamics of the frequency deviation ω and the voltage angle φ, with a differential equation describing the voltage dynamics [41]. Then, the dynamics of node (area) i ∈ V are the following (see for instance [18, 21, 41, 56] for further details):
φ̇_i = ω_i
τ_pi ω̇_i = −ψ_i ω_i + P_ci + P_di + Σ_{j∈N_i} V_i V_j B_ij sin(φ_i − φ_j)
τ_vi V̇_i = Ē_fi − (1 − χ_di B_ii) V_i + χ_di Σ_{j∈N_i} V_j B_ij cos(φ_i − φ_j), (2.26)

where φ_i, ω_i, V_i, P_di : R_{≥0} → R, τ_pi, τ_vi, ψ_i, Ē_fi, B_ii, B_ij ∈ R, χ_di := X_di − X'_di with X_di, X'_di ∈ R, and P_ci : R_{≥0} → R is the power generated by the i-th (equivalent) synchronous generator, which can be expressed as the output of a first-order dynamical system describing the behavior of the turbine-governor, i.e.,

τ_ci Ṗ_ci = −P_ci − ξ_i^{−1} ω_i + u_i, (2.27)

where u_i : R_{≥0} → R is the control input and τ_ci, ξ_i ∈ R.
Now, we can write systems (2.26) and (2.27) compactly for all nodes i ∈ V as

θ̇ = A^⊤ ω
τ_p ω̇ = −ψω + P_c + P_d − AΥ(V) sin(θ)
τ_v V̇ = −χ_d E(θ)V + Ē_f
τ_c Ṗ_c = −P_c − ξ^{−1}ω + u, (2.28)

where ω, V, P_c, P_d, u : R_{≥0} → R^n, θ : R_{≥0} → R^m denotes the vector of the voltage angle differences, i.e., θ := A^⊤φ, τ_p, τ_v, ψ, χ_d, τ_c, ξ ∈ R^{n×n}, and Ē_f ∈ R^n. Moreover, Υ : R^n → R^{m×m} is defined as Υ(V) := diag{Υ_1(V), ..., Υ_m(V)}, where the k-th entry is associated with the line k ∼ {i, j} connecting areas i and j. Furthermore, for any i, j ∈ V, the components of E : R^m → R^{n×n} are defined as follows:

E_ii(θ) = 1/χ_di − B_ii, i ∈ V
E_ij(θ) = −B_ij cos(θ_k) = E_ji(θ), k ∼ {i, j} ∈ E
E_ij(θ) = 0, otherwise. (2.29)
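The closed-loop behavior of (2.26)–(2.27) under pure droop action (u = 0) can be sketched for two areas. In the snippet below the voltage dynamics are frozen (V_i = 1) for brevity, and every parameter value, including the sign of B_12 that makes the coupling restoring under the sign convention of (2.26), is hypothetical:

```python
import numpy as np

# Forward-Euler sketch of the two-area swing + turbine-governor dynamics
# (2.26)-(2.27) with constant voltages V_i = 1 and u = 0 (droop only).
tau_p = np.array([10.0, 8.0]); psi    = np.array([1.5, 1.0])
tau_c = np.array([2.0, 2.0]);  xi_inv = np.array([2.0, 2.0])
Pd  = np.array([1.0, -1.0])    # balanced uncontrolled injections
B12 = -5.0                     # hypothetical line susceptance

ph, w, Pc = np.zeros(2), np.zeros(2), np.zeros(2)
dt = 0.01
for _ in range(20000):         # simulate 200 s
    coup = B12 * np.sin(ph - ph[::-1])       # V_i V_j B_ij sin(ph_i - ph_j)
    dph = w
    dw  = (-psi * w + Pc + Pd + coup) / tau_p
    dPc = (-Pc - xi_inv * w) / tau_c         # turbine-governor (2.27), u = 0
    ph, w, Pc = ph + dt * dph, w + dt * dw, Pc + dt * dPc

# frequencies return to ~0 and sin(angle difference) settles near Pd1/|B12| = 0.2
print(np.round(w, 4), round(float(np.sin(ph[0] - ph[1])), 3))
```

Since the injections sum to zero, the synchronized frequency deviation is zero and the power imbalance is carried by the line through a constant angle difference.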
Note that since the control input u(t) in the rest of the thesis is not a stochastic signal, we do not need to impose any assumption on the average power or on the boundedness of the second moment of a random variable.
We also note that the power system dynamics and the networks of synchronous machines can be represented as port-Hamiltonian systems admitting a shifted passivity property [123, 124]. Then, the port-Hamiltonian framework can be used to design control schemes guaranteeing the stability of the overall network. However, since the loads are considered time-varying, we use the output regulation methodology for the controller design.
Remark 2.1. (Susceptance and reactance). According to [21, Remark 1], we notice that the reactance X_di of each generator i ∈ V is in practice generally larger than the corresponding transient reactance X'_di. Furthermore, the self-susceptance B_ii is negative and satisfies |B_ii| > Σ_{j∈N_i} |B_ij|. Therefore, E(θ) is a strictly diagonally dominant and symmetric matrix with positive elements on its diagonal, implying that E(θ) is positive definite [41].
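This property is easy to verify numerically. The sketch below builds E(θ) from (2.29) for a hypothetical 3-area ring network whose susceptances satisfy the conditions of Remark 2.1, and checks positive definiteness over random angle vectors:

```python
import numpy as np

# Hypothetical 3-area ring: B_ii negative with |B_ii| > sum_j |B_ij|,
# hence E(theta) of (2.29) is strictly diagonally dominant, symmetric,
# and therefore positive definite for every theta.
chi_d = np.array([0.4, 0.4, 0.4])
B = np.array([[-12.0,  -5.0,  -5.0],
              [ -5.0, -12.0,  -5.0],
              [ -5.0,  -5.0, -12.0]])
lines = [(0, 1), (1, 2), (0, 2)]        # line k ~ {i, j}

def E(theta):
    M = np.diag(1.0 / chi_d - np.diag(B))            # E_ii = 1/chi_di - B_ii
    for k, (i, j) in enumerate(lines):
        M[i, j] = M[j, i] = -B[i, j] * np.cos(theta[k])  # E_ij = -B_ij cos(theta_k)
    return M

rng = np.random.default_rng(1)
eigs = [np.linalg.eigvalsh(E(rng.uniform(-np.pi, np.pi, 3))).min()
        for _ in range(100)]
print(min(eigs) > 0)   # True
```

By the Gershgorin circle theorem, every eigenvalue here is at least 14.5 − 10 = 4.5, consistent with the sampled minima.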
2.7 Concluding Remarks
In this chapter, we have recalled some preliminaries, including stochastic calculus, the output regulation methodology, the MPC method, the PCPM algorithm, and the DC and AC network models, which are relevant to the rest of this thesis.