

Systems Engineering Group

Department of Mechanical Engineering
Eindhoven University of Technology
PO Box 513, 5600 MB Eindhoven
The Netherlands
http://se.wtb.tue.nl/

SE Report: Nr. 2009-03

Computing Supremal Minimum-Weight Controllable and Normal Sublanguages

Rong Su, Jan H. van Schuppen, Jacobus E. Rooda and Mihály Petreczky

ISSN: 1872-1567

SE Report: Nr. 2009-03 Eindhoven, April 2009


Abstract

In practical applications we are frequently required to find a supervisor that achieves certain optimal performance. Performance measures such as maximum throughput or minimum execution time/cost can be specified in terms of weights. In this paper we first define a minimum-weight supervisory control problem on weighted discrete-event systems. We then show that the supremal minimum-weight controllable and normal sublanguages exist and can be computed by a terminating algorithm.


1 Introduction

In many practical applications we are required to develop a controller that not only enforces safety- and liveness-related specifications, but also achieves certain optimal performance. For example, in the semiconductor industry cluster tools are used to process wafers. A typical requirement is that the system should not deadlock or livelock, and should achieve as high a throughput as possible [33]. A similar requirement appears in highway traffic control. At a certain abstraction level, such a system can be modeled as a weighted automaton, which carries weights on its edges. Each weight can represent execution time or cost. The problem is to synthesize a supervisor which guarantees that the closed-loop behavior complies with the specifications and is controllable and observable [2] [3], and furthermore, the supervisor can drive the system towards a desirable state with minimum time/cost in terms of an appropriate sum of weights.

In this paper, controllability, normality and supremality are adopted. We first define the weight of each nonblocking controllable and normal sublanguage as the weight of the longest string in this sublanguage. Then we present two supervisor synthesis problems: to find the supremal (i.e. largest in terms of set inclusion) nonblocking controllable sublanguage and the supremal controllable and normal sublanguage, respectively, that have the minimum finite weight. After that we provide for each problem a terminating algorithm to compute the relevant supremal minimum-weight sublanguage. Our contributions in this paper are twofold: first, we propose a weighted supervisory control problem under partial observation; second, we present an efficient computational algorithm to solve the problem.

Synthesis problems involving quantitative costs have been discussed in the literature, especially in optimal supervisory control, e.g. [9] [7] [8] [10] [11] [12]. These approaches aim to find a supervisor that can drive a deterministic plant from the initial state to a target state set with minimum cost. Although their general goals are the same as ours, they differ from ours in various aspects besides their deterministic setup. In [9] weights are assigned to transitions, but all events are assumed to be controllable, and the supervisor need not be the least restrictive; furthermore, partial observation is not considered. The setup in [7] is very close to ours, except that it is for deterministic systems and partial observation is not considered. In [8] the weights are not assigned to transitions; instead, they are considered in terms of the cost of disabling events, reaching undesirable states, and not being able to reach desirable states. Thus, their optimal control problem is different from ours. Besides, they adopt state-feedback control instead of the event-based control used in this paper. In [10] weights are assigned to events and states, not to transitions. The cost function is defined as the sum of event weights and event disabling costs at certain states. If event disabling costs are assumed to be zero, then their problem is the same as our first problem under full observation, but their computational procedure is completely different from ours. Furthermore, partial observation is not considered. [11] is an extension of [10], where partial observation is taken into consideration. But their approach is to first project out all unobservable events from the plant model, and then solve the optimal control problem on the observable abstraction of the original plant model using the technique proposed in [10]. The resulting supervisor is least restrictive with respect to the observable abstraction, but not with respect to the original model. This differs from our goal of achieving the least restrictive supervisor with respect to the original plant model. In [12] weights are assigned to transitions and to event disabling. But they define the weight of a (controllable) sublanguage as a signed real measure, which is roughly the sum of all string weights, instead of the maximum string weight used in our paper, making their optimal control problem different from ours.

Synthesis problems involving quantitative costs have also been discussed in supervisory control of timed automata [4] [5] [6] [25] [31]. There, roughly speaking, the goal is to synthesize a controller such that the closed-loop system complies with the specification and can reach a certain desirable state (e.g. a marker state) with a cost no more than some pre-specified value, even when uncontrollable events happen. Among these approaches, [6] is an extension of [4] and [5]. [25] deals with a special control strategy, where a winning path must follow a special pattern of alternating controllable and uncontrollable events. [31] deals with acyclic timed automata. In [4, 5, 6, 25, 31] a game-theoretic approach is used, where uncontrollable events play the role of the opponent, who tries to maximize the cost; the controllable events then correspond to the player who tries to minimize the cost. A solution of the control problem corresponds to a strategy of the player associated with the controllable events. The arising game-theoretic problem is a two-person zero-sum dynamic game; by modeling controllable actions as the actions of one player and uncontrollable actions as those of the other, the problem can be related to the game theory literature, see [14, 21]. Games on graphs and finite-state Markov chains have been studied before, see [23, 20, 13, 15, 16, 24, 18, 19]. However, none of the cited results seem to be directly applicable to the problem considered in this paper. The problem in [13, 24, 19] is similar to the problem of this paper; however, in [13, 24, 19] only the cost of the terminal state, and not of the path leading to it, is considered. In [23, 20, 22] stochastic finite-state games are considered. In [18] finite-state dynamic games with mean payoff are considered. However, the payoff function in our setting is not mean-payoff. The class of repeated games with overtaking payoff [21] seems to be the closest to our setting. However, we did not find any solution algorithm for the infinite-horizon case in the literature. In [17], combined parity and mean-payoff games were considered; again, the costs considered here are not of the mean-payoff type.

Besides the differences in the formulation of the supervisory control problem, including that we deal with systems with partial observations, which are not considered in the existing approaches, pursuing supremal control strategies makes our approach different from most game-theory-based approaches. Although in [6] the authors also aim at the supremal strategy and, under an appropriate transformation of the formulation, their computational algorithm is close to ours, no specific upper bound for the termination of the algorithm is given in [6] because of the involvement of time, and their algorithm does not take conformance to specifications into consideration, which, in contrast, is dealt with in our algorithm. Weighted automata also appear in supervisory control or fault diagnosis of probabilistic/stochastic discrete-event systems, e.g. [26] [28] [32], in speech recognition, e.g. [30] [29], and in verification, e.g. [27]. But their target problems are completely different from ours.

This report is organized as follows. In Section II we first provide all relevant concepts about languages and automata, and then introduce a minimum-weight supervisory control problem. After that we present a terminating algorithm in Section III, which computes the supremal minimum-weight controllable sublanguages. In Section IV we discuss how to handle partial observation by introducing the concept of supremal minimum-weight controllable and normal sublanguages, and providing an algorithm to compute them. Conclusions are drawn in Section V. All long proofs are provided in the Appendix.


2 Minimum-Weight Supervisory Control Problems

In this section we first review basic concepts of languages and weighted finite-state automata. Then we present two minimum-weight supervisory control problems that take full and partial observation into consideration, respectively.

Let Σ be a finite alphabet, and let Σ∗ denote the Kleene closure of Σ, i.e. the collection of all finite sequences of events taken from Σ. Given two strings s, t ∈ Σ∗, s is called a prefix substring of t, written as s ≤ t, if there exists s′ ∈ Σ∗ such that ss′ = t, where ss′ denotes the concatenation of s and s′. We use ǫ to denote the empty string of Σ∗, such that for any string s ∈ Σ∗, ǫs = sǫ = s. We use |s| to denote the length of s; e.g. if s = aab then |s| = 3. In particular, |ǫ| = 0. A subset L ⊆ Σ∗ is called a language. L̄ = {s ∈ Σ∗ | (∃t ∈ L) s ≤ t} ⊆ Σ∗ is called the prefix closure of L. L is called prefix closed if L = L̄. Given two languages L, L′ ⊆ Σ∗, LL′ := {ss′ ∈ Σ∗ | s ∈ L ∧ s′ ∈ L′}. When L is a singleton, say L = {s}, we simply use sL′ to denote {s}L′.

A weighted finite-state automaton is a pair (G = (X, Σ, ξ, x0, Xm), f), where G denotes a deterministic finite-state automaton with state set X, alphabet Σ, (partial) transition function ξ : X × Σ → X, initial state x0 and marker state set Xm, and f : X × Σ → R+ is the weight function, where R+ denotes the set of positive reals. We use ξ(x, σ)! to denote that the transition ξ(x, σ) is defined, and ¬ξ(x, σ)! that ξ(x, σ) is not defined. As usual, we extend the domain of ξ from X × Σ to X × Σ∗. Let L(G) := {s ∈ Σ∗ | ξ(x0, s)!} be the closed behavior of G and Lm(G) := {s ∈ L(G) | ξ(x0, s) ∈ Xm} the marked behavior of G. Given a language K ⊆ Σ∗, let K̄ := {s ∈ Σ∗ | (∃s′ ∈ K) s ≤ s′} be the prefix closure of K. We use φ(Σ) to denote the collection of all finite-state automata whose alphabet is Σ.

To associate a weight to each sublanguage, we first define a weight for each string. To this end, let θG : X × Σ∗ → R+ be a map, where

1. θG(x, ǫ) = 0
2. (∀sσ ∈ Σ∗) θG(x, sσ) := θG(x, s) + f(ξ(x, s), σ) if ξ(x, s)! and ξ(x, sσ)!, and θG(x, sσ) is undefined otherwise

In other words, the weight of a string is simply the sum of weights of transitions appearing in this string. For each sublanguage K ⊆ L(G), the weight of K with respect to G is defined as follows:

ωG(K) := max_{s∈K} θG(x0, s)   if K ≠ ∅ and K is finite
ωG(K) := +∞   otherwise

The motivation for assigning an infinite weight to the empty set can be explained as follows. Usually a set of strings is associated with a property, e.g. K ⊆ Lm(G) is a collection of strings that can reach marker states. When a set is empty, it means there is no finite string satisfying that property. Thus, the weight of the empty set should not be finite.
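To make the two definitions above concrete, the following minimal Python sketch (illustrative only, not part of the original report; the dictionary encoding of G and the names xi, f, theta and omega are assumptions made here) computes θG for a string and ωG for a finite sublanguage K ⊆ L(G):

from math import inf

# Hypothetical encoding of a weighted automaton (G, f): the partial transition
# function xi and the weight function f are dictionaries keyed by (state, event).
xi = {("x0", "a"): "x1", ("x1", "d"): "x3"}
f = {("x0", "a"): 1.0, ("x1", "d"): 6.0}

def theta(x, s):
    """theta_G(x, s): sum of the transition weights along s from x; None if undefined."""
    total = 0.0
    for sigma in s:
        if (x, sigma) not in xi:
            return None          # theta_G is a partial function
        total += f[(x, sigma)]
        x = xi[(x, sigma)]
    return total

def omega(x0, K):
    """omega_G(K) for a finite K included in L(G): maximum string weight, +inf if K is empty."""
    if not K:
        return inf
    return max(theta(x0, s) for s in K)

print(theta("x0", "ad"))      # 7.0
print(omega("x0", {"ad"}))    # 7.0
print(omega("x0", set()))     # inf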

To present our first supervisory control problem, we need the concept of controllability. Let Σ = Σc ∪ Σuc, where the disjoint subsets Σc and Σuc denote the set of controllable events and the set of uncontrollable events, respectively.


Definition 2.1. [34] Given G ∈ φ(Σ), a language K ⊆ L(G) is controllable with respect to G if K̄Σuc ∩ L(G) ⊆ K̄.
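As an illustration (not the authors' implementation), the controllability condition of Definition 2.1 can be checked directly for a finite language K, given a membership test for L(G); the helper names below are assumptions of this sketch:

def prefix_closure(K):
    """The prefix closure of K, with events encoded as single characters."""
    return {s[:i] for s in K for i in range(len(s) + 1)}

def is_controllable(K, in_LG, Sigma_uc):
    """True iff (prefix closure of K) . Sigma_uc intersected with L(G) stays in the closure."""
    Kbar = prefix_closure(K)
    return all(s + u in Kbar
               for s in Kbar
               for u in Sigma_uc
               if in_LG(s + u))

# Toy usage: with L(G) containing "", "a", "ad" and d uncontrollable, K = {"a"} is not
# controllable (ad is in L(G) but not in the closure of K), while K = {"ad"} is.
in_LG = lambda s: s in {"", "a", "ad"}
print(is_controllable({"a"}, in_LG, {"d"}))    # False
print(is_controllable({"ad"}, in_LG, {"d"}))   # True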

Given another automaton E ∈ φ(Σ), let

WC(G, E) := {K ⊆ Lm(G) ∩ Lm(E) | K is controllable w.r.t. G ∧ ωG(K) < ∞}

be the collection of controllable sublanguages K ⊆ Lm(G) ∩ Lm(E) whose weights are finite. It is possible that WC(G, E) = ∅. Because

min_{(x,σ)∈X×Σ: ξ(x,σ)!} f(x, σ) > 0

for any K ∈ WC(G, E) we can derive that the set {K′ ∈ WC(G, E) | ωG(K′) ≤ ωG(K)} is finite. Thus, there exists K∗ ∈ WC(G, E) such that

(∀K ∈ WC(G, E)) ωG(K∗) ≤ ωG(K)

Since an arbitrary union of controllable sublanguages is still controllable, we have that

∪_{K∈WC(G,E): ωG(K)=ωG(K∗)} K ∈ WC(G, E)

which is called the supremal minimum-weight controllable sublanguage of G with respect to E and denoted as supWC(G, E). We now introduce one more concept before we define a supervisor synthesis problem.

Definition 2.2. G = (X, Σ, ξ, x0, Xm) is marking deadlock if (∀x ∈ Xm)(∀σ ∈ Σ) ¬ ξ(x, σ)!. 

A marking deadlock automaton is one whose marker states are deadlock states. The reason we are interested in marking deadlock automata is that we want to design a controller such that the first time of reaching a marker state in the closed-loop system is as early as possible. Once a marker state is reached, whether the plant continues or stops is not of our interest. In reality, after a marker state is reached, the system can be reset to repeat the same sequence again, as commonly seen in manufacturing systems. We now state the first problem that we will solve in this paper:

Problem 2.3. Given a plant G ∈ φ(Σ), which is marking deadlock, and a specification E ∈ φ(Σ), how to compute supWC(G, E)?

In Problem 2.3 we consider every event to be observable. This may not be true in reality. To deal with partial observation, we first introduce the concept of normality. Let G ∈ φ(Σ) and Σ = Σo ∪ Σuo, where the disjoint subsets Σo and Σuo denote the set of observable events and the set of unobservable events, respectively. Let Σ′ ⊆ Σ. A mapping P : Σ∗ → Σ′∗ is called the natural projection with respect to (Σ, Σ′) if

1. P(ǫ) = ǫ
2. (∀σ ∈ Σ) P(σ) := σ if σ ∈ Σ′, and P(σ) := ǫ otherwise
3. (∀sσ ∈ Σ∗) P(sσ) = P(s)P(σ)


Given a language L ⊆ Σ∗, P(L) := {P(s) ∈ Σ′∗ | s ∈ L}. The inverse image mapping of P is

P−1 : 2^Σ′∗ → 2^Σ∗ : L ↦ P−1(L) := {s ∈ Σ∗ | P(s) ∈ L}

Let Po : Σ∗ → Σo∗ be the natural projection. We have the following definition.

Definition 2.4. Let K ⊆ L(G). We say K is normal with respect to G and Po if K̄ = L(G) ∩ Po−1(Po(K̄)).
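The natural projection and the normality test of Definition 2.4 can likewise be sketched for finite languages. This is an illustrative fragment, not the report's code; the encoding of L(G) as a finite set and the helper names are assumptions:

def project(s, Sigma_o):
    """Po(s): erase every event not in Sigma_o (events encoded as single characters)."""
    return "".join(sigma for sigma in s if sigma in Sigma_o)

def prefix_closure(K):
    return {s[:i] for s in K for i in range(len(s) + 1)}

def is_normal(K, LG, Sigma_o):
    """True iff the prefix closure of K equals L(G) intersected with Po^-1(Po(closure of K))."""
    Kbar = prefix_closure(K)
    observed = {project(s, Sigma_o) for s in Kbar}
    return Kbar == {s for s in LG if project(s, Sigma_o) in observed}

# Toy usage: with b and d unobservable, ab and ad have the same projection "a",
# so a sublanguage containing ab but not ad cannot be normal.
LG = {"", "a", "ab", "ad"}
print(is_normal({"ab"}, LG, {"a"}))          # False
print(is_normal({"ab", "ad"}, LG, {"a"}))    # True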

This definition is different from the one defined in [34], where normality is a property on the language itself, not on its prefix closure as used in Def. 2.4. We hope this slight abuse of notation will not cause any confusion for readers. Given another automaton E ∈ φ(Σ), let

WCN (G, E) := {K ∈ WC(G, E)|K is normal with respect to G and Po}

For notational simplicity, from now on we say a sublanguage K ⊆ L(G) is controllable and normal with respect to G if K is controllable with respect to G and Σuc, and normal with respect to G and Po. Thus, WCN(G, E) is the collection of all controllable and normal sublanguages of Lm(G) ∩ Lm(E) with respect to G whose weights are finite. Since an arbitrary union of controllable and normal sublanguages is still controllable and normal, by an argument similar to the one showing the existence of the supremal minimum-weight controllable sublanguage supWC(G, E), we can derive that there exists a unique element K∗ ∈ WCN(G, E) such that

(∀K ∈ WCN(G, E)) ωG(K∗) ≤ ωG(K) ∧ [ωG(K) = ωG(K∗) ⇒ K ⊆ K∗]

We call K∗ the supremal minimum-weight controllable and normal sublanguage of G with respect to E, and denote it as supWCN(G, E). Our second problem is specified as follows:

Problem 2.5. Given a plant G ∈ φ(Σ), which is marking deadlock, and a specification E ∈ φ(Σ), how to compute supWCN(G, E)?

Next, we provide terminating procedures to compute supWC(G, E) and supWCN(G, E), and show their correctness.

3 Computing Supremal Minimum-Weight Controllable Sublanguages

The basic idea is that we first compute the supremal controllable sublanguage of G under E, and then search for the supremal minimum-weight controllable sublanguage (SMWCS) of G within the previously computed supremal controllable sublanguage. By doing this we can make sure that the computed controllable sublanguage complies with the specification Lm(E). Searching for the SMWCS can be done in a way similar to searching for the optimal winning strategy in timed game theory [6], or to Dijkstra's algorithm for the single-source shortest-paths problem [1], except that Dijkstra's algorithm does not distinguish controllable and uncontrollable edges, and it updates the cost of a state based on the costs of its ancestor states instead of the costs of its descendant states. Since the algorithm may encounter infinitely large weights, we treat +∞ as a number and adopt the following rule: for any a ∈ R+, (+∞) + a = +∞.

Procedure for Supremal Minimum-Weight Controllable Sublanguages (PSMWCS):

1. Input: a marking deadlock G = (X, Σ, ξ, x0, Xm, f) and a specification E ∈ φ(Σ)
2. Initialization:
   (a) Compute the supremal controllable sublanguage K ⊆ Lm(G) ∩ Lm(E) of G.
   (b) If K = ∅ then K̂ := ∅ and go to Step (5).
   (c) Construct a weighted automaton (S = (Y, Σ, η, y0, Ym), f′) such that Lm(S) = K, L(S) = L̄m(S) (i.e. S is nonblocking) and
       (∀s, s′ ∈ Σ∗) η(y0, s) = η(y0, s′) ⇒ ξ(x0, s) = ξ(x0, s′)
       and the weight function f′ : Y × Σ → R+ is defined as follows:
       (∀s ∈ L(S))(∀σ ∈ Σ) f′(η(y0, s), σ) := f(ξ(x0, s), σ)
   (d) For each y ∈ Ym, set κ0(y) = 0
   (e) For each y ∈ Y − Ym, define κ0(y) := +∞
3. Iterate on k = 1, 2, ... as follows:
   (a) For each y ∈ Ym, κk(y) := 0
   (b) For each y ∈ Y − Ym we have
       κk(y) := max_{σ∈Σuc ∧ η(y,σ)!} (f′(y, σ) + κk−1(η(y, σ)))   if (∃σ ∈ Σuc) η(y, σ)!
       κk(y) := min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + κk−1(η(y, σ′)))   if ∅ ≠ {σ ∈ Σ | η(y, σ)!} ⊆ Σc
       κk(y) := κk−1(y)   otherwise
   (c) Termination when: (∃r ∈ N)(∀y ∈ Y) κr−1(y) = κr(y)
4. If κr(y0) = +∞, then K̂ := ∅ and go to Step (5). Otherwise, let S′ = (Y′, Σ, η′, y0, Y′m) where
   (a) Y′ := {y ∈ Y | κr(y) < +∞}
   (b) Y′m := Y′ ∩ Ym
   (c) η′ : Y′ × Σ → Y′, where for any (y, σ) ∈ Y′ × Σ,
       η′(y, σ) := η(y, σ)   if η(y, σ) ∈ Y′ and f′(y, σ) + κr(η(y, σ)) ≤ κr(y)
       η′(y, σ) is not defined   otherwise
   and let K̂ := Lm(S′).
5. Output: K̂

The definition of S guarantees that the weight function f′ is well defined. As long as K ≠ ∅, we can always find such an automaton S. For example, we can first find an arbitrary recognizer of K, say Ŝ, with L(Ŝ) = L̄m(Ŝ). Then let S := G × Ŝ, which, as one can check, satisfies the definition. What PSMWCS does is the following: first, the supremal controllable sublanguage K ⊆ Lm(G) ∩ Lm(E) with respect to G is computed. If K = ∅ then we know that WC(G, E) = ∅. When K ≠ ∅, we start to search for the largest sublanguage of K that is controllable with respect to G and has the minimum finite weight. The search is based on a recognizer S of K. At each stage k, the weight κk(y) at each state y of S is equal to the minimum worst-case accumulated weight from y to a marker state reachable from y within no more than k transitions. The updating rule says that at each stage k, the weight of each marker state is always zero, while the weight of a non-marker state is updated based on the weights of its descendants in the previous stage k − 1. When the termination condition holds at k, we first check whether κk(y0) is finite. If κk(y0) = +∞, then it is not possible to drive the plant from the initial state y0 to a marker state within a finite number of transitions by simply disabling controllable events. If κk(y0) < +∞, then we construct S′, whose transition map indicates that only transitions that are part of paths whose weights are no more than the minimum worst-case weight of the initial state are allowed. We will show that Lm(S′) = supWC(G, E). To this end, we first show that the procedure PSMWCS terminates no later than k = |Y|, where |Y| denotes the size of Y. For this we need to introduce a few more concepts and lemmas.
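The fixed-point iteration of Step 3 can be sketched in a few lines of Python. The following is an illustrative rendering, not the implementation used by the authors; the dictionary-based encoding of S (eta, fp, Ym, Sigma_uc) is an assumption of this sketch. It follows the updating rule of Step 3(b) and the termination condition of Step 3(c), using float("inf") for +∞ so that the rule (+∞) + a = +∞ holds automatically:

INF = float("inf")

def kappa_iteration(states, Ym, eta, fp, Sigma_uc):
    """Iterate kappa_k over the states of S until it stabilizes and return the final map.

    states   : iterable of states Y
    Ym       : set of marker states
    eta      : dict mapping (state, event) -> next state (partial transition function)
    fp       : dict mapping (state, event) -> weight (the function f')
    Sigma_uc : set of uncontrollable events
    """
    kappa = {y: (0.0 if y in Ym else INF) for y in states}   # kappa_0
    while True:
        new = {}
        for y in states:
            if y in Ym:
                new[y] = 0.0                                  # marker states stay at 0
                continue
            uc = [(e, t) for (q, e), t in eta.items() if q == y and e in Sigma_uc]
            c = [(e, t) for (q, e), t in eta.items() if q == y and e not in Sigma_uc]
            if uc:        # worst case over uncontrollable successors
                new[y] = max(fp[(y, e)] + kappa[t] for e, t in uc)
            elif c:       # best case over controllable successors
                new[y] = min(fp[(y, e)] + kappa[t] for e, t in c)
            else:         # no transition defined at y
                new[y] = kappa[y]
        if new == kappa:  # termination condition of Step 3(c)
            return kappa
        kappa = new

As shown later (Theorem 3.5), this iteration stabilizes after at most |Y| rounds.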

Definition 3.1. Suppose S is computed in PSMWCS. For any two strings s, s′ ∈ Σ∗ and y ∈ Y, we say s′ is an uncontrollable bypath of s with respect to y, denoted as s ≻y s′, if η(y, s) ∈ Ym and η(y, s′) ∈ Ym and the following condition holds:

(∃s1, s2, s3 ∈ Σ∗)(∃σ ∈ Σuc) s = s1s2 ∧ s′ = s1σs3



What Def. 3.1 says is that an uncontrollable bypath s′ of s with respect to y shares a prefix substring s1 with s, departs from s by an uncontrollable event σ, and both strings reach a marker state from state y. Next, we introduce the concepts of chains and maximal chains.

Definition 3.2. Suppose S is computed in PSMWCS. For each y ∈ Y and s ∈ Σ∗, a set c(y, s) ⊆ {s′ ∈ Σ∗ | s ≻y s′} ∪ {s} with s ∈ c(y, s) is a chain of s with respect to y if

(∀t ∈ c̄(y, s)) |{σ ∈ Σ | tσ ∈ c̄(y, s)} ∩ Σc| ≤ 1

where c̄(y, s) denotes the prefix closure of c(y, s). We say c(y, s) is maximal if

(∀t ∈ c̄(y, s))(∀σ ∈ Σuc) η(y, tσ)! ⇒ tσ ∈ c̄(y, s)



From Def. 3.2 we get that any prefix substring of a string in a chain can have at most one controllable extension within the chain. If the chain is maximal, then any uncontrollable extension of a prefix substring that is allowed in S should still be a prefix substring of that chain. This property actually suggests that each maximal chain is controllable with respect to S (thus with respect to G as well, because S is controllable with respect to G). Let γ(y, s) be the collection of all maximal chains of s with respect to y. For each k ∈ N we define

̺(y, k) := {C ∈ γ(y, s) | η(y, s) ∈ Ym ∧ sup_{s′∈C} |s′| ≤ k}

Thus, ̺(y, k) is the collection of all maximal chains whose longest string has length no more than k. Let

υk(y) := min_{C∈̺(y,k)} max_{s′∈C} θS(y, s′)   if ̺(y, k) ≠ ∅
υk(y) := +∞   otherwise

Informally speaking, υk(y) denotes the minimum worst-case weight from y to a marker state within k transitions, where 'worst-case' means that uncontrollable transitions may pull the system away from the path with the absolute minimum weight. We have the following lemmas.

Lemma 3.3. Suppose S is computed by PSMWCS. For any y ∈ Y and k ∈ N, we have κk(y) = υk(y).

The proof of Lemma 3.3 is provided in the Appendix. It relates the weight of a state in PSMWCS to the weight of a maximal chain, and the number of iterations k to the length of the longest string in that maximal chain.

Lemma 3.4. Suppose S is computed by PSMWCS. For any y ∈ Y and C ∈ ̺(y, |Y|), there exists Ĉ ∈ ̺(y, |Y| − 1) such that max_{û∈Ĉ} θS(y, û) ≤ max_{u∈C} θS(y, u).

The proof of Lemma 3.4 is presented in the Appendix. We now use Lemma 3.3 and Lemma 3.4 to show the following theorem.

Theorem 3.5. In PSMWCS, for any y ∈ Y we have κ|Y |−1(y) = κ|Y |(y). 

Proof: By Lemma 3.3 we have κ|Y|(y) = υ|Y|(y). Since ̺(y, |Y| − 1) ⊆ ̺(y, |Y|), by Lemma 3.4 we have

υ|Y|(y) = min_{C∈̺(y,|Y|)} max_{u∈C} θS(y, u) if ̺(y, |Y|) ≠ ∅, and +∞ otherwise
        = min_{Ĉ∈̺(y,|Y|−1)} max_{û∈Ĉ} θS(y, û) if ̺(y, |Y| − 1) ≠ ∅, and +∞ otherwise
        = υ|Y|−1(y)

By Lemma 3.3 we then have κ|Y|−1(y) = κ|Y|(y), and the theorem follows.

Theorem 3.5 indicates that the termination condition in PSMWCS will be satisfied no later than k = |Y|. Next, we need to show that the output of PSMWCS is indeed what we want. To this end we first need the following lemma.

Lemma 3.6. For any s ∈ Lm(S), every maximal chain C ∈ γ(y0, s) is controllable with respect to G; and every nonempty controllable sublanguage K ⊆ Lm(S) with respect to G contains a maximal chain C ∈ γ(y0, s) for some s ∈ K.

The proof of Lemma 3.6 is provided in the Appendix. Lemma 3.6 is used in the proof of the following theorem.

Theorem 3.7. Given a weighted finite-state automaton (G, f) with a marking deadlock G ∈ φ(Σ) and a specification E ∈ φ(Σ), let K̂ be computed by PSMWCS. Then (1) K̂ = ∅ if and only if WC(G, E) = ∅; (2) when K̂ ≠ ∅, we have K̂ = supWC(G, E).

Figure 1: Example 1: Plant Model G

The proof of Theorem 3.7 is presented in the Appendix. Next, we use a simple example to illustrate the procedure. Suppose the plant model G is depicted in Figure 1, where Σ = {a, b, c, d} and Σc = {a, b}. The initial state is x0 and the marker states are x3 and x4. The cost function f is as follows:

f(x0, a) = 1, f(x1, a) = 2, f(x1, b) = 2, f(x1, d) = 6, f(x2, a) = 2, f(x2, c) = 1, f(x2, d) = 1

For convenience we show the cost values directly on the relevant edges in Figure 1. For example, the edge a/1 denotes that the event is a and the corresponding cost value of this transition is 1. The specification E is depicted in Figure 2.

Figure 2: Example 1: Specification Model E

We can easily check that Lm(G) ∩ Lm(E) is controllable with respect to G. Let S be an automaton depicted in Figure 3, which recognizes Lm(G) ∩ Lm(E).

Figure 3: Example 1: Automaton Model S

The cost function f′ for S is defined as follows:

f′(y0, a) = 1, f′(y1, b) = 2, f′(y1, d) = 6, f′(y2, c) = 1, f′(y2, d) = 1

We now apply PSMWCS on S; the computational results are listed as follows.

1. k = 0: κ0(y3) = κ0(y4) = 0, κ0(y0) = +∞, κ0(y1) = +∞, κ0(y2) = +∞
2. k = 1: κ1(y3) = κ1(y4) = 0, κ1(y0) = +∞, κ1(y1) = 6, κ1(y2) = +∞
3. k = 2: κ2(y3) = κ2(y4) = 0, κ2(y0) = 7, κ2(y1) = 6, κ2(y2) = +∞
4. k = 3: κ3(y3) = κ3(y4) = 0, κ3(y0) = 7, κ3(y1) = 6, κ3(y2) = 8
5. k = 4: κ4(y3) = κ4(y4) = 0, κ4(y0) = 7, κ4(y1) = 6, κ4(y2) = 8

At k = 4 we have κ3(y) = κ4(y) for every y ∈ Y, so the termination condition holds. Since κ4(y0) = 7 < ∞, we construct S′ as follows:

1. Y′ = {y0, y1, y2, y3, y4}
2. Y′m = Ym ∩ Y′ = {y3, y4}
3. The transition function η′ is almost the same as η, except that η′(y1, b) is not defined because, although η(y1, b) ∈ Y′, the condition κ4(y1) ≥ f′(y1, b) + κ4(η(y1, b)) does not hold.

The final S′ is depicted in Figure 4, where states y2 and y3 are unreachable from y0.

Figure 4: Example 1: Automaton Model S′

Thus, K̂ = Lm(S′) = {ad}. We can easily verify that the supremal minimum-weight controllable sublanguage is supWC(G, E) = {ad}. Thus, K̂ = supWC(G, E), as predicted by Theorem 3.7.
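The listed κ values can be reproduced with a small standalone Python check. This is illustrative only; the concrete transition targets of S are inferred from Figure 3 and from the values above, so they are assumptions of this sketch rather than data taken verbatim from the figure:

INF = float("inf")

# Recognizer S of Example 1 (structure inferred): y0 -a-> y1, y1 -b-> y2, y1 -d-> y4,
# y2 -c-> y0, y2 -d-> y3, with y3 and y4 the marker states and c, d uncontrollable.
states = {"y0", "y1", "y2", "y3", "y4"}
Ym = {"y3", "y4"}
Sigma_uc = {"c", "d"}
eta = {("y0", "a"): "y1", ("y1", "b"): "y2", ("y1", "d"): "y4",
       ("y2", "c"): "y0", ("y2", "d"): "y3"}
fp = {("y0", "a"): 1, ("y1", "b"): 2, ("y1", "d"): 6, ("y2", "c"): 1, ("y2", "d"): 1}

kappa = {y: (0 if y in Ym else INF) for y in states}
for k in range(1, 5):
    new = {}
    for y in states:
        uc = [(e, t) for (q, e), t in eta.items() if q == y and e in Sigma_uc]
        c = [(e, t) for (q, e), t in eta.items() if q == y and e not in Sigma_uc]
        if y in Ym:
            new[y] = 0
        elif uc:
            new[y] = max(fp[(y, e)] + kappa[t] for e, t in uc)
        elif c:
            new[y] = min(fp[(y, e)] + kappa[t] for e, t in c)
        else:
            new[y] = kappa[y]
    kappa = new
    print(k, kappa)
# At k = 4 this prints kappa(y0) = 7, kappa(y1) = 6, kappa(y2) = 8 and 0 for the
# marker states, matching the values listed above.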

Next, we describe how to solve Problem 2.5, namely how to compute the supremal minimum-weight controllable and normal sublanguage supWCN(G, E).

4 Computing Supremal Minimum-Weight Controllable and Normal Sublanguages

We first present a terminating algorithm below and provide an intuitive explanation for it. Then we show that the procedure fulfils our expectation for solving Problem 2.5.

Procedure for Supremal Minimum-Weight Controllable and Normal Sublanguages (PSMWCNS):

1. Input: a marking deadlock G = (X, Σ, ξ, x0, Xm, f) and a specification E ∈ φ(Σ)
2. Initialization:
   (a) Compute the supremal controllable and normal sublanguage K ⊆ Lm(G) ∩ Lm(E) with respect to G.
   (b) If K = ∅ then set KCN := ∅ and go to Step (6).
   (c) Construct a weighted automaton (S = (Y, Σ, η, y0, Ym), f′) such that Lm(S) = K, L(S) = L̄m(S) and
       (∀s, s′ ∈ Σ∗) η(y0, s) = η(y0, s′) ⇒ ξ(x0, s) = ξ(x0, s′)
       and the weight function f′ : Y × Σ → R+ is defined as follows:
       (∀s ∈ L(S))(∀σ ∈ Σ) f′(η(y0, s), σ) := f(ξ(x0, s), σ)
   (d) For each y ∈ Ym, set κ0(y) = 0
   (e) For each y ∈ Y − Ym, define κ0(y) := +∞
3. Iterate on k = 1, 2, ... as follows:
   (a) For each y ∈ Ym, κk(y) := 0
   (b) For each y ∈ Y − Ym we have
       κk(y) := max_{σ∈Σuc ∧ η(y,σ)!} (f′(y, σ) + κk−1(η(y, σ)))   if (∃σ ∈ Σuc) η(y, σ)!
       κk(y) := min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + κk−1(η(y, σ′)))   if ∅ ≠ {σ ∈ Σ | η(y, σ)!} ⊆ Σc
       κk(y) := κk−1(y)   otherwise
   (c) Termination when: (∃r ∈ N)(∀y ∈ Y) κr−1(y) = κr(y)
4. If κr(y0) = +∞, then KCN := ∅ and go to Step (6). Otherwise, let S′ = (Y′, Σ, η′, y0, Y′m) where
   (a) Y′ := {y ∈ Y | κr(y) < +∞}
   (b) Y′m := Y′ ∩ Ym
   (c) η′ : Y′ × Σ → Y′, where for any (y, σ) ∈ Y′ × Σ, η′(y, σ) := η(y, σ) if η(y, σ) ∈ Y′, and η′(y, σ) is not defined otherwise
   and compute the largest controllable and normal sublanguage K̂ ⊆ Lm(S′) with respect to G. If K̂ = ∅ then KCN := ∅ and go to Step (6). Otherwise, continue.
5. Set K̂0 := K̂ and iterate on r = 0, 1, ... as follows:
   (a) Compute the set ψ(K̂r) := {s ∈ K̂r | θS(y0, s) = ωS(K̂r)}.
   (b) Compute the largest controllable and normal sublanguage K̂r+1 ⊆ K̂r − ψ(K̂r) with respect to G.
   (c) If K̂r+1 = ∅ then set KCN := K̂r and go to Step (6). Otherwise, continue with r + 1.
6. Output: KCN

What PSMWCNS does is the following. We first compute the largest language Lm(S′) ⊆ Lm(G) ∩ Lm(E) which contains every string allowing a finite reach from the initial state x0 to a marker state x ∈ Xm, even when uncontrollable events happen. From the previous section we can derive that Lm(S′) = ∪_{s∈Lm(S′)} c(y0, s), and furthermore, Lm(S′) is controllable. But it may not be normal. Thus, at Step (4) of PSMWCNS we check whether the largest controllable and normal sublanguage K̂ of Lm(S′) is nonempty. If K̂ = ∅, then we know that supWCN(G, E) = ∅. Otherwise, we continue to search at Step (5) for the largest controllable and normal sublanguage of Lm(S′) with respect to G that has the minimum weight. What Step (5) does is the following: we first remove all strings with the maximum weight from K̂r, where the set ψ(K̂r) of all strings with the maximum weight is computable by a Dijkstra-like algorithm that searches for maximum-weight instead of minimum-weight paths. Then we compute the largest controllable and normal sublanguage K̂r+1 ⊆ K̂r − ψ(K̂r) with respect to G. If K̂r+1 = ∅, then we can show that K̂r = supWCN(G, E). Otherwise, we know that the weight of the supremal minimum-weight controllable and normal sublanguage is no bigger than ωS(K̂r+1), which is smaller than ωS(K̂r). Thus, we continue our search by repeating Steps (5.a)-(5.c) on K̂r+1. Since K̂ is finite, the computation at Step (5) always terminates.
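The pruning loop of Step (5) can be sketched as follows for a finite K̂. This is only an illustration: the helper sup_cn, standing for "largest controllable and normal sublanguage with respect to G", is assumed to be supplied by standard supervisory-control machinery and is not implemented here.

def psmwcns_step5(Khat, theta, sup_cn):
    """Iterate Steps (5.a)-(5.c) of PSMWCNS on a finite language Khat.

    theta  : function mapping a string s to its weight theta_S(y0, s)
    sup_cn : function mapping a finite language to its largest controllable and
             normal sublanguage with respect to G (assumed, not implemented here)
    """
    Kr = set(Khat)
    while True:
        w = max(theta(s) for s in Kr)              # omega_S(Kr) for a finite, nonempty Kr
        psi = {s for s in Kr if theta(s) == w}     # psi(Kr): the maximum-weight strings
        Knext = sup_cn(Kr - psi)                   # Step (5.b)
        if not Knext:                              # Step (5.c): Kr is the answer
            return Kr
        Kr = set(Knext)

Since each iteration strictly decreases the weight of the remaining language and K̂ is finite, the loop terminates.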

We now formally show that KCN, the output of PSMWCNS, is the supremal minimum-weight controllable and normal sublanguage of G with respect to E.

Theorem 4.1. Given a weighted finite-state automaton (G, f) with a marking deadlock G ∈ φ(Σ) and a specification E ∈ φ(Σ), let KCN be computed by PSMWCNS. Then (1) KCN = ∅ if and only if WCN(G, E) = ∅; (2) when KCN ≠ ∅, we have KCN = supWCN(G, E).

The proof of Theorem 4.1 is provided in the Appendix. To illustrate the procedure PSMWCNS, we consider the following example depicted in Figure 5, where Σ = {a, b, c, d, e}, Σc = {a, b, e} and Σo = {a, e}. The cost function f is as follows:

f(x0, a) = 1, f(x0, e) = 10, f(x1, a) = 2, f(x1, b) = 2, f(x1, d) = 6, f(x2, a) = 2, f(x2, c) = 1, f(x2, d) = 1

Figure 5: Example 2: Plant Model G

The specification E is depicted in Figure 6.

Figure 6: Example 2: Specification Model E

We first compute the largest controllable and normal sublanguage K ⊆ Lm(G) ∩ Lm(E) with respect to G; a recognizer S of K is depicted in Figure 7. The cost function f′ for S is defined as follows:

f′(y0, a) = 1, f′(y0, e) = 10, f′(y1, b) = 2, f′(y1, d) = 6, f′(y2, c) = 1, f′(y2, d) = 1, f′(y5, e) = 10

Figure 7: Example 2: Automaton Model S

We now apply PSMWCNS on S. Since there is no loop in S, every state in S has a finite weight. Thus, S′ is the same as S, depicted in Figure 8.

Figure 8: Example 2: Automaton Model S′

Since a is controllable and observable, we can check that Lm(S′) is controllable and normal with respect to G. Thus, K̂ = Lm(S′). By using an algorithm similar to Dijkstra's algorithm for the single-source shortest-paths problem [1], we can find all strings in Lm(S′) with the maximum weight; here there is one such string, s = abde, with weight 14. Thus, ψ(K̂) = {abde}, and K′ = K̂ − ψ(K̂) is depicted in Figure 9.

Figure 9: Example 2: Automaton that Recognizes K′

We now compute the largest controllable and normal sublanguage K′′ ⊆ K′ with respect to G. It turns out that K′′ = {e}. The reason is as follows. Since ab ∈ K̄′, abd ∉ K̄′, abd ∈ L(G) and d ∈ Σuc, we get that ab ∉ K̄′′; otherwise K′′ would not be controllable with respect to G. Since ab ∉ K̄′′ and Po(ab) = Po(ad), we have ad ∉ K̄′′; otherwise K′′ would not be normal with respect to G and Po. Since ad ∉ K̄′′ and d ∈ Σuc, we get that a ∉ K̄′′; otherwise K′′ would not be controllable with respect to G. We set K̂ := K′′ and repeat the previous computation. Clearly, after taking out e, we get K′ = ∅, which means K′′ = ∅. Thus, the final output is KCN = {e}, and the corresponding weight is 10.

5 Conclusions

In this paper we first define the concepts of supremal minimum-weight controllable sublanguages and supremal minimum-weight controllable and normal sublanguages, and present two minimum-weight supervisory control problems, where full and partial observation are considered respectively. Then we provide an algorithm, PSMWCS, to compute the supremal minimum-weight controllable sublanguages. We have shown that the algorithm terminates within a number of steps no more than the number of states of the plant model. After that, we present a terminating algorithm, PSMWCNS, to compute the supremal minimum-weight controllable and normal sublanguages. The supervisory control problems are formulated in a centralized manner, namely we have one plant and one specification. In reality, we may encounter high computational complexity during centralized synthesis. Thus, it is of primary interest to us to investigate whether a similar approach can be applied to hierarchical and distributed systems, which will be addressed in our future papers.


Acknowledgement:

We would like to thank Dr. Albert T. Hofkamp of the Systems Engineering Group at Eindhoven University of Technology for coding all algorithms mentioned in this paper.

Appendix

1. Proof of Lemma 3.3: We use induction on k. When k = 0, for any y ∈ Ym we have ̺(y, 0) = {{ǫ}} ≠ ∅; and for any y ∈ Y − Ym we have ̺(y, 0) = ∅. When ̺(y, 0) ≠ ∅, by the definition we have θS(y, ǫ) = 0. Thus, the claim holds for k = 0. Suppose it holds for k ≥ 0. We need to show that it holds for k + 1. To this end, we consider three cases.

Case 1: suppose there exists σ ∈ Σuc such that η(y, σ)!. Then

κk+1(y) = max_{σ∈Σuc ∧ η(y,σ)!} (f′(y, σ) + κk(η(y, σ)))

Let y′σ = η(y, σ). By the induction hypothesis we have

κk(y′σ) = υk(y′σ) = min_{W∈̺(y′σ,k)} max_{s′∈W} θS(y′σ, s′) if ̺(y′σ, k) ≠ ∅, and +∞ otherwise

If there exists σ ∈ Σuc such that ̺(y′σ, k) = ∅ then, since S is nonblocking, we get that for any C′ ∈ ∪_{r∈N} ̺(y′σ, r) there exists s′ ∈ C′ such that |s′| > k. We now show that ̺(y, k + 1) = ∅. Suppose it is not true. Then let C ∈ ̺(y, k + 1). Since σ ∈ Σuc, by the definition of ≻y we get that there exists C′ ∈ ∪_{r∈N} ̺(y′σ, r) such that σC′ ⊆ C. Thus, there exists σs′ ∈ C such that |σs′| = |s′| + 1 > k + 1, which contradicts the assumption that C ∈ ̺(y, k + 1). Thus, ̺(y, k + 1) = ∅, which means υk+1(y) = +∞. On the other hand, since κk(y′σ) = +∞ we get that κk+1(y) = +∞. Thus, κk+1(y) = υk+1(y).

If for any σ ∈ Σuc with η(y, σ)! we have ̺(η(y, σ), k) ≠ ∅, then for any s = σ′t with σ′ ∈ Σc and any C ∈ γ(y, s) ∩ ̺(y, k + 1), there exists s′ = σ′′t′ with σ′′ ∈ Σuc such that there exists C′ ∈ γ(y, s′) ∩ ̺(y, k + 1) with C′ ⊆ C. Thus,

υk+1(y) = min_{C∈̺(y,k+1)} max_{s′∈C} θS(y, s′) = min_{C∈γ(y,s′)∩̺(y,k+1) ∧ s′∈ΣucΣ∗} max_{s′∈C} θS(y, s′)   (1)

Thus, υk+1(y) can be determined based only on strings starting with an uncontrollable event. For any C ∈ γ(y, s′) ∩ ̺(y, k + 1) with s′ ∈ ΣucΣ∗, by the definition of ≻y we can derive that for each σ ∈ Σuc there exists C′(σ) ∈ ̺(η(y, σ), k) such that C = ∪_{σ∈Σuc} σC′(σ). For notational simplicity, let d(C′(σ)) := max_{s′∈C′(σ)} θS(η(y, σ), s′). By Equation (1) we have

υk+1(y) = min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ)))   (2)

We will show that from Equation (2) we can derive

υk+1(y) = max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)))   (3)

To this end, suppose

f′(y, σ∗) + d(C∗(σ∗)) = min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ)))

Then we have that, for any {C′(σ) ∈ ̺(η(y, σ), k) | σ ∈ Σuc},

f′(y, σ∗) + d(C∗(σ∗)) ≤ max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ)))

In particular, we choose C′(σ) as arg min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)). Thus, we have

min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ))) ≤ max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)))

To show the opposite direction of the inequality, let

f′(y, σ∗) + d(C∗(σ∗)) = max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)))

Thus, we have that, for any C′(σ) ∈ ̺(η(y, σ), k),

f′(y, σ∗) + d(C∗(σ∗)) ≤ max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ)))

In particular, we choose {C′(σ) ∈ ̺(η(y, σ), k) | σ ∈ Σuc} as

arg min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ)))

Thus, we have

min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ))) ≥ max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)))

which means

min_{{C′(σ)∈̺(η(y,σ),k) | σ∈Σuc}} max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + d(C′(σ))) = max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + min_{C′(σ)∈̺(η(y,σ),k)} d(C′(σ)))

Thus, Equation (3) is true, from which we can derive that

υk+1(y) = max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + υk(η(y, σ)))

By the induction hypothesis we get

υk+1(y) = max_{σ∈Σuc: η(y,σ)!} (f′(y, σ) + κk(η(y, σ))) = κk+1(y)

Thus, the lemma holds for Case 1.

Case 2: suppose there are only events σ ∈ Σc such that η(y, σ)!. By the definition we have

κk+1(y) = min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + κk(η(y, σ′)))

By the induction hypothesis we have

κk+1(y) = min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + υk(η(y, σ′)))

By the definition of υk+1(y) we have

υk+1(y) = min_{C∈̺(y,k+1)} max_{s∈C} θS(y, s) if ̺(y, k + 1) ≠ ∅, and +∞ otherwise

For any C ∈ ̺(y, k + 1) there exists σ′s′ ∈ Σ∗ with σ′ ∈ Σc such that C ∈ {σ′C′ | C′ ∈ γ(η(y, σ′), s′)}. Thus, max_{s∈C} θS(y, s) = f′(y, σ′) + max_{s′∈C′} θS(η(y, σ′), s′). Furthermore, ̺(y, k + 1) ≠ ∅ if and only if ̺(η(y, σ′), k) ≠ ∅ for some σ′ ∈ Σc with η(y, σ′)!. If ̺(y, k + 1) = ∅, then for any σ′ ∈ Σc with η(y, σ′)! we have ̺(η(y, σ′), k) = ∅, namely υk(η(y, σ′)) = +∞. Thus, by the induction hypothesis we have κk(η(y, σ′)) = +∞, which means κk+1(y) = +∞. On the other hand, since ̺(y, k + 1) = ∅, we have υk+1(y) = +∞. Thus, κk+1(y) = υk+1(y). If ̺(y, k + 1) ≠ ∅, we can derive that

min_{C∈̺(y,k+1)} max_{s∈C} θS(y, s) = min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + min_{C′∈̺(η(y,σ′),k)} max_{s′∈C′} θS(η(y, σ′), s′)) = min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + υk(η(y, σ′)))

Thus, by the induction hypothesis, we again get that κk+1(y) = υk+1(y).

Case 3: suppose there is no σ ∈ Σ such that η(y, σ)!. Since S is nonblocking, we have y ∈ Ym. Clearly, ̺(y, k + 1) = ̺(y, k) = {{ǫ}}. Thus, υk+1(y) = υk(y) = 0. By the definition we have κk+1(y) = κk(y). By the induction hypothesis we have υk(y) = κk(y). Thus, κk+1(y) = υk+1(y), which means the lemma holds for Case 3.

Since the lemma holds in every case, we conclude that the lemma is true.

2. Proof of Lemma 3.4: For any C ∈ ̺(y, |Y|) there are two possibilities: either C ∈ ̺(y, |Y| − 1), or C ∉ ̺(y, |Y| − 1). For the latter case, there exists s ∈ C such that |s| = |Y|. Let C′ ∈ γ(y, s) be a maximal chain of s with respect to y with C′ ⊆ C. Such a C′ must exist because C is a maximal chain. Let Ŝ = (Ŷ, Σ, η̂, ŷ0, Ŷm) be a tree automaton such that

1. Ŷ := C̄′ (the prefix closure of C′)
2. Ŷm := C′ and ŷ0 := ǫ
3. η̂ : Ŷ × Σ → Ŷ, where for any ŷ, ŷ′ ∈ Ŷ and σ ∈ Σ, η̂(ŷ, σ) = ŷ′ if ŷ′ = ŷσ

Since C′ ∈ ̺(y, |Y|), the state set Ŷ is finite. Thus, Ŝ is well defined. We can easily check that Lm(Ŝ) = C′ and L(Ŝ) = C̄′. Let T = (Ŷ, Σ ∪ {τ}, δ, ŷ0, Ŷm) be a new tree automaton, where τ ∉ Σ and δ is equal to η̂ when restricted to Ŷ × Σ. For any ŷ, ŷ′ ∈ Ŷ, if η(y, ŷ) = η(y, ŷ′) and there exists σ ∈ Σc such that ŷσ ≤ ŷ′ (remember that ŷ and ŷ′ are strings), then add an edge τ from ŷ to ŷ′ in T, namely δ(ŷ, τ) = ŷ′. We now use the following procedure to modify T.

1. Initially set T0 = (Ŷ0, Σ ∪ {τ}, δ0, ŷ0, Ŷm,0) := T
2. Given Tk = (Ŷk, Σ ∪ {τ}, δk, ŷ0, Ŷm,k), construct Tk+1 as follows:
   (a) Pick two states ŷk, ŷ′k ∈ Ŷk such that δk(ŷk, τ) = ŷ′k. Let
       g(ŷk, ŷ′k) := {ŷ′′k ∈ Ŷk | (∃u, u′ ∈ Σ∗) u ≠ ǫ ∧ δk(ŷk, u) = ŷ′′k ∧ δk(ŷ′′k, u′) = ŷ′k}
       If no such two states exist, then g(ŷk, ŷ′k) := ∅
   (b) Ŷk+1 := Ŷk − g(ŷk, ŷ′k) and Ŷm,k+1 := Ŷm,k ∩ Ŷk+1
   (c) δk+1 : Ŷk+1 × (Σ ∪ {τ}) → Ŷk+1, where for any ŷ, ŷ′ ∈ Ŷk+1 and σ ∈ Σ ∪ {τ}, if δk(ŷ, σ) = ŷ′ then δk+1(ŷ, σ) = ŷ′; and if δk(ŷ′k, σ)! then δk+1(ŷk, σ) := δk(ŷ′k, σ)
   Terminate when there exists r ∈ N such that Tr−1 = Tr.
3. In Tr, for each ŷ ∈ Ŷr, if there exists σ ∈ Σuc such that δr(ŷ, σ)!, then remove all controllable edges at ŷ. Let T̂ be the resultant reachable tree.

Since T is a tree, the above procedure terminates in a finite number of iterations. We now show that Ĉ := Lm(T̂) is a maximal chain in ̺(y, |Y| − 1). Since C′ is a maximal chain, we can derive that Ĉ satisfies the chain condition of Def. 3.2 with respect to y. By the construction of T̂ and the fact that C′ is a maximal chain, we can also derive that Ĉ satisfies the maximality condition of Def. 3.2. Thus, Ĉ is a maximal chain in ̺(y, |Y|). We now need to show that Ĉ ∈ ̺(y, |Y| − 1). Suppose this is not true. Then there exists t ∈ Ĉ such that |t| = |Y|. By the Pumping Lemma, there must exist t1, t2, t3 ∈ Σ∗ such that t = t1t2t3, t2 ≠ ǫ and η(y, t1) = η(y, t1t2). By the construction of T̂ we know that there exist σ ∈ Σuc and t′2 ∈ Σ∗ such that t2 = σt′2. Furthermore, by the definition of T̂ we can derive that, for any t′′2σ′ ≤ t′2, either σ′ ∈ Σuc or, if σ′ ∈ Σc, then for any σ′′ ∈ Σ, if δr(δr(y, t1σt′′2), σ′′)! then σ′′ = σ′ (namely σ′ is the only exit transition at the state δr(y, t1σt′′2)). This means that, for any string s′ ∈ C′, if η(y, s′) = η(y, δr(y, t1)), then for any n ∈ N, s′(t2)n ∈ C′. But this contradicts the assumption that C′ ∈ ̺(y, |Y|). Therefore, Ĉ ∈ ̺(y, |Y| − 1). Clearly,

max_{û∈Ĉ} θS(y, û) ≤ max_{u′∈C′} θS(y, u′) ≤ max_{u∈C} θS(y, u)

Thus, Lemma 3.4 is true.

3. Proof of Lemma 3.6: Let C ∈ γ(y0, s). If C is not controllable with respect to G, then there exist s′ ∈ C̄ and σ ∈ Σuc such that s′σ ∈ L(G) but s′σ ∉ C̄. Since C ⊆ Lm(S), which is controllable with respect to G, we get that s′σ ∈ L̄m(S). Thus, there exists s′′ ∈ Σ∗ such that s′σs′′ ∈ Lm(S). But this means s ≻y0 s′σs′′, contradicting the assumption that C is a maximal chain.

Let K ⊆ Lm(S) be a nonempty controllable sublanguage with respect to G. Pick s ∈ K and let c(y0, s) be a maximal chain of s with respect to y0. Suppose c(y0, s) − K ≠ ∅. For any s′ ∈ c(y0, s) − K, clearly s′ ∈ c(y0, s) but s′ ∉ K. Since s ∈ c(y0, s), we have ǫ ∈ c̄(y0, s). Thus, there exist t, t′ ∈ Σ∗ and σ ∈ Σ such that s′ = tσt′, t ∈ K̄ but tσ ∉ K̄. Since K is controllable with respect to G, we get that σ ∈ Σc. On the other hand, we have K ⊆ Lm(S). Thus, there exists t′′ ∈ Σ∗ such that s′′ = tt′′ ∈ K. Clearly c′(y0, s) := (c(y0, s) ∪ {s′′}) − {s′} is also a maximal chain of s with respect to y0. If we define a choice map g : Σ∗ → Σ∗, which maps each s′ ∈ c(y0, s) − K to a string s′′ ∈ K as described above, we can show that ĉ(y0, s) := (c(y0, s) ∪ {g(s′) | s′ ∈ c(y0, s) − K}) − (c(y0, s) − K) is a maximal chain of s with respect to y0, and ĉ(y0, s) ⊆ K. Thus, the lemma is true.

4. Proof of Theorem 3.7: (1) We first show that if K̂ = ∅, then WC(G, E) = ∅. By PSMWCS, if K̂ = ∅, then either Lm(G) ∩ Lm(E) has no controllable sublanguage or κk(y0) = +∞. In the former case, clearly, WC(G, E) = ∅. In the latter case, by the proof of Theorem 3.5 we get that υk(y0) = +∞, which, by Lemma 3.6, indicates that for any controllable sublanguage K ⊆ Lm(G) ∩ Lm(E) we have ωG(K) = ωS(K) = +∞. Thus, WC(G, E) = ∅.

We now show that if WC(G, E) = ∅, then K̂ = ∅. Since WC(G, E) = ∅, either Lm(G) ∩ Lm(E) has no controllable sublanguage or there is no controllable sublanguage with a finite weight. In the former case, clearly K̂ = ∅ (see Steps (2.a)-(2.b) in PSMWCS). In the latter case, by Lemma 3.3 and Lemma 3.6 we have κk(y0) = +∞. Thus, K̂ = ∅.

(2) Next, suppose K̂ ≠ ∅; we want to show that K̂ = supWC(G, E). To this end, we first show that K̂ ∈ WC(G, E). Clearly, K̂ ⊆ Lm(S) ⊆ Lm(G) ∩ Lm(E). Furthermore, ωG(K̂) = ωS(K̂) = κk(y0) < +∞. Thus, we only need to show that K̂ is controllable with respect to G and Σuc. To this end, for each state y ∈ Y′ we have

κk(y) := max_{σ∈Σuc ∧ η(y,σ)!} (f′(y, σ) + κk−1(η(y, σ)))   if (∃σ ∈ Σuc) η(y, σ)!
κk(y) := min_{σ′∈Σc ∧ η(y,σ′)!} (f′(y, σ′) + κk−1(η(y, σ′)))   else if (∃σ ∈ Σc) η(y, σ)!
κk(y) := κk−1(y)   otherwise

For any σ ∈ Σuc, if η(y, σ)! then κk(η(y, σ)) = κk−1(η(y, σ)) < κk(y). Thus, by the definition of S′ we get that η(y, σ) ∈ Y′. Furthermore, by induction we can show that there exist y1, ..., yn ∈ Y′ and σ1, ..., σn ∈ Σ with yn ∈ Ym such that

y1 = η′(y, σ1) ∧ (∀i ∈ {2, 3, ..., n}) yi = η′(yi−1, σi)

Therefore, L̄m(S′) = L(S′). Thus, K̂ = Lm(S′) is controllable with respect to G.

Next, we show that ωG(K̂) = ωG(supWC(G, E)). By the construction of S we get that ωG(K̂) = ωS(K̂) and ωG(supWC(G, E)) = ωS(supWC(G, E)). Any K ∈ WC(G, E) satisfies K ⊆ Lm(S) and is controllable with respect to G. Thus, by Lemma 3.6, K contains a maximal chain c(y0, s) for some s ∈ Lm(S). Since ωG(K) < +∞, we get that there exists r ∈ N such that c(y0, s) ∈ ̺(y0, r). If r ≤ k, from the proof of Theorem 3.5 we get that

ωG(K̂) = υk(y0) = min_{C∈̺(y0,k)} max_{s′∈C} θS(y0, s′) ≤ max_{s′∈c(y0,s)} θS(y0, s′) = ωG(c(y0, s)) ≤ ωG(K)

When r > k, by the proof of Theorem 3.5 we get that υr(y0) = κr(y0) = κk(y0) = υk(y0). Thus,

ωG(K̂) = υk(y0) = υr(y0) = min_{C∈̺(y0,r)} max_{s′∈C} θS(y0, s′) ≤ ωG(c(y0, s)) ≤ ωG(K)

Thus, in any case we have ωG(K̂) ≤ ωG(supWC(G, E)). On the other hand, since K̂ ∈ WC(G, E), we get that ωG(K̂) ≥ ωG(supWC(G, E)). Thus, ωG(K̂) = ωG(supWC(G, E)).

Finally, we show that K̂ = supWC(G, E). We have shown that K̂ ∈ WC(G, E), which means K̂ ⊆ supWC(G, E). Thus, we only need to show the opposite inclusion. For any s ∈ supWC(G, E), let c(y0, s) be a maximal chain of s with respect to y0. Clearly, c(y0, s) ⊆ supWC(G, E) because otherwise supWC(G, E) would not be controllable. So we have

ωS(c(y0, s)) ≤ ωS(supWC(G, E)) = ωS(K̂)

Thus, c(y0, s) ⊆ Lm(S′) = K̂, which means s ∈ K̂. Therefore, supWC(G, E) ⊆ K̂.

5. Proof of Theorem 4.1: (1) Suppose KCN = ∅. Then we have three cases to consider. Case 1.1: K = ∅, namely Lm(G) ∩ Lm(E) has no controllable and normal sublanguage; Case 1.2: Step 3 terminates at k and κk(y0) = +∞; Case 1.3: Lm(S′) ≠ ∅ but K̂ = ∅, namely Lm(S′) has no controllable and normal sublanguage with respect to G. For Case 1.1, clearly, WCN(G, E) = ∅. For Case 1.2, by the proof of Theorem 3.7 we get that there is no controllable and normal sublanguage of Lm(G) ∩ Lm(E) with respect to G with a finite weight. Thus, WCN(G, E) = ∅. For Case 1.3, again this means there is no controllable and normal sublanguage of Lm(G) ∩ Lm(E) with a finite weight. Thus, WCN(G, E) = ∅. On the other hand, if WCN(G, E) = ∅, then either Lm(G) ∩ Lm(E) has no controllable and normal sublanguage or there is no controllable and normal sublanguage with a finite weight. In the former case, clearly, KCN = ∅ because K = ∅. In the latter case, since every controllable and normal sublanguage of Lm(G) ∩ Lm(E) is a controllable sublanguage of K, we have two subcases to consider. Subcase 1: there is no controllable sublanguage of K with a finite weight. Then by Lemma 3.3 and Lemma 3.6, when Step 3 of PSMWCNS terminates at k, we have κk(y0) = +∞. Thus, KCN = ∅. Subcase 2: every controllable sublanguage of K with a finite weight contains no controllable and normal sublanguage with a finite weight. In this subcase we can derive that K̂ = ∅. Thus, again KCN = ∅.

(2) Suppose KCN ≠ ∅. We want to show that KCN = supWCN(G, E). Clearly, K̂0 = K̂ ∈ WCN(G, E). Furthermore, for any W ∈ WCN(G, E) we have W ⊆ K̂ = K̂0. In particular, supWCN(G, E) ⊆ K̂0. Suppose Step (5) terminates at r + 1 with r ≥ 0. We use induction to show that supWCN(G, E) ⊆ K̂r. It is true for l = 0. We assume that it is also true for l and we need to show that it is true for l + 1 ≤ r. Since supWCN(G, E) ⊆ K̂l, we have ωS(supWCN(G, E)) ≤ ωS(K̂l). For any s ∈ K̂l with θS(y0, s) = ωS(K̂l), we have s ∈ ψ(K̂l). Since K̂r ≠ ∅ and l < r, we have K̂l − ψ(K̂l) ≠ ∅. Then

ωS(K̂l − ψ(K̂l)) < ωS(K̂l)

Let K̂l+1 be the largest controllable and normal sublanguage of K̂l − ψ(K̂l) with respect to G. Since l + 1 ≤ r, we have K̂l+1 ≠ ∅. Thus, K̂l+1 ∈ WCN(G, E), which means

ωG(supWCN(G, E)) = ωS(supWCN(G, E)) ≤ ωS(K̂l+1) < ωS(K̂l)

Since supWCN(G, E) ⊆ K̂l, and since for any controllable and normal sublanguage W ⊆ K̂l with respect to G we have

ωS(W) < ωS(K̂l) ⇒ W ⊆ K̂l+1

we obtain

supWCN(G, E) ⊆ K̂l+1

Thus, the induction is complete, namely supWCN(G, E) ⊆ K̂r. Since K̂r+1 = ∅, for any controllable and normal sublanguage W ⊆ K̂r with respect to G we get that W ∩ ψ(K̂r) ≠ ∅, which means ωS(W) = ωS(K̂r). Since supWCN(G, E) ⊆ K̂r, we have ωS(supWCN(G, E)) = ωS(K̂r). Thus, supWCN(G, E) = K̂r = KCN, and the theorem follows.


Bibliography

[1] E.W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.

[2] P.J. Ramadge and W.M. Wonham. Supervisory control of a class of discrete event systems. SIAM J. Control and Optimization, 25(1):206–230, 1987.

[3] W.M. Wonham and P.J. Ramadge. On the supremal controllable sublanguage of a given language. SIAM J. Control and Optimization, 25(3):637–659, 1987.

[4] O. Maler, A. Pnueli and J. Sifakis. On the synthesis of discrete controllers for timed systems. In Proc. 12th Symposium on Theoretical Aspects of Computer Science (STACS95), volume 900 of LNCS, pages 229-242, 1995.

[5] E. Asarin, O. Maler, A. Pnueli and J. Sifakis. Controller synthesis for timed automata. In Proc. IFAC Symposium on System Structure and Control, pages 469-474, 1998.

[6] E. Asarin and O. Maler. As soon as possible: time optimal control for timed automata. In Proc. 2nd International Workshop on Hybrid Systems: Computation and Control (HSCC99), volume 1569 of LNCS, pages 19-30, 1999.

[7] Y. Brave and M. Heymann. On optimal attraction of discrete-event processes. International Journal of Information Sciences, 67(3):245-276, 1993.

[8] R. Kumar and V. Garg. Optimal supervisory control of discrete event dynamical systems. SIAM J. Control and Optimization, 33(2):419-439, 1995.

[9] K. Passino and P. Antsaklis. On the optimal control of discrete event systems. In Proc. 28th IEEE Decision and Control Conference (CDC89), pages 2713-2718, 1989.

[10] R. Sengupta and S. Lafortune. An optimal control theory for discrete event systems. SIAM J. Control and Optimization, 36(2):488-541, 1998.

[11] H. Marchand, O. Boivineau and S. Lafortune. Optimal control of discrete event systems under partial observation. In Proc. 40th IEEE Conference on Decision and Control (CDC01), pages 2335-2340, 2001.

[12] J. Pu, C.M. Lagoa, and A. Ray. Robust optimal control of regular languages with event cost uncertainties. In Proc. 42nd IEEE Conference on Decision and Control (CDC03), pages 3209-3214, 2003.

[13] Steve Alpern. Cycles in extensive form perfect information games. Journal of Mathematical Analysis and Applications, 159(1):1-17, 1991.

[14] T. Basar and G.J. Olsder. Dynamic Noncooperative Game Theory. SIAM, 1999.

[15] C. Berge. Topological games with perfect information. In Contributions to the Theory of Games, volume 3 of Annals of Mathematical Studies Vol. 39, pages 165-178. Princeton Univ. Press, Princeton, NJ, 1953.

[16] C. Berge. The Theory of Graphs. Methuen, London, 1962.

[17] K. Chatterjee, T.A. Henzinger, and M. Jurdzinski. Mean-payoff parity games. In Proc. 20th Annual IEEE Symposium on Logic in Computer Science (LICS 2005), pages 178-187, June 2005.

[18] A. Ehrenfeucht and J. Mycielski. Positional strategies for mean payoff games. Internat. J. Game Theory, 8:109-113, 1979.


[19] H. Everett. Recursive games. In Contributions to the Theory of Games, volume 3 of Annals of Mathematical Studies Vol. 39, pages 47-78. Princeton Univ. Press, Princeton, NJ, 1953.

[20] H. Kushner and S. Chamberlain. Finite state stochastic games: Existence theorems and computational procedures. IEEE Transactions on Automatic Control, 14(3):248-255, June 1969.

[21] M.J. Osborne and A. Rubinstein. A course in game theory. MIT, 1994.

[22] Stephen D. Patek and Dimitri P. Bertsekas. Stochastic shortest path games. SIAM Journal on Control and Optimization, 37:804-824, 1999.

[23] L. S. Shapley. Stochastic games. Proc. Nat. Acad. Sci. U. S. A., 39:1095-1100, 1953.

[24] Alan Washburn. Deterministic graphical games. J. Math. Anal. Appl., 153(1):84-96, 1990.

[25] S. Tripakis and K. Altisen. On-the-fly controller synthesis for discrete and dense-time systems. In Proc. of World Congress on Formal Methods (FM99), volume 1708 of LNCS, pages 233-252, 1999.

[26] Y. Li, F. Lin and Z. Lin. Supervisory control of probabilistic discrete-event systems with recovery. IEEE Trans. Autom. Control, 44(10):1971–1975, 1999.

[27] R. Alur, S. La Torre and G.J. Pappas. Optimal paths in weighted timed automata. In Proc. 4th International Workshop on Hybrid Systems: Computation and Control (HSCC01), volume 2034 of LNCS, pages 49-62, 2001.

[28] R. Kumar and V.K. Garg. Control of stochastic discrete event systems modeled by probabilistic languages. IEEE Trans. Autom. Control, 46(4):593–606, 2001.

[29] M. Mohri. Edit-distance of weighted automata: general definitions and algorithms. International Journal of Foundations of Computer Science, 14(6):957-982, 2003.

[30] M. Mohri, F. Pereira and M. Riley. Weighted finite-state transducers in speech recognition. Computer Speech & Language, 16(1):69-88, 2002.

[31] S. La Torre, S. Mukhopadhyay and A. Murano. Optimal-reachability and control for acyclic weighted timed automata. In Proc. 2nd IFIP Conference on Theoretical Computer Science (TCS02), pages 485-497, 2002.

[32] R. Su and W.M. Wonham. Probability measure on regular languages and its application in fault diagnosis for discrete-event systems. In 5th IEEE Asian Control Conference, pages 412-419, 2004.

[33] J. Yi, S. Ding, M.T. Zhang and P. van der Meulen. Throughput analysis of linear cluster tools. In Proc. 3rd IEEE International Conference on Automation Science and Engineering (CASE07), pages 1063-1068, 2007.

[34] W. M. Wonham (2007). Supervisory Control of Discrete-Event Systems. Systems Control Group, Dept. of ECE, University of Toronto. URL:
