Model abstraction of nondeterministic finite state automata in
supervisor synthesis
Citation for published version (APA):
Su, R., Schuppen, van, J. H., & Rooda, J. E. (2008). Model abstraction of nondeterministic finite state automata in supervisor synthesis. (SE report; Vol. 2008-03). Technische Universiteit Eindhoven.
Document status and date: Published: 01/01/2008
Systems Engineering Group, Department of Mechanical Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands. http://se.wtb.tue.nl/
SE Report: Nr. 2008-03
Model Abstraction of
Nondeterministic Finite State
Automata in Supervisor Synthesis
Rong Su, Jan H. van Schuppen and Jacobus E. Rooda
ISSN: 1872-1567
SE Report: Nr. 2008-03 Eindhoven, June 2008
Abstract
Blockingness is one of the major obstacles that need to be overcome in the Ramadge-Wonham supervisory synthesis paradigm, especially for large systems. In this paper we propose an abstraction technique to overcome this difficulty. We first provide details of this abstraction technique, then describe how to apply it to supervisor synthesis, in which plant models are nondeterministic but specifications and supervisors are deterministic. We show that a nonblocking state-controllable state-observable (or state-normal) supervisor for an abstraction of a plant under a specification is guaranteed to be a nonblocking state-controllable state-observable (or state-normal) supervisor of the original plant under the same specification. The reverse statement is also true if every observable event is contained in the abstraction and the plant model is marking aware with respect to the alphabet of the abstraction.
1 Introduction
The automaton-based Ramadge-Wonham (RW) supervisory control paradigm first appeared in the control literature in 1982 and was subsequently summarized in the well-known journal papers [1, 2]. Since then a large volume of literature has appeared under the same paradigm. In the RW paradigm the main difficulty of supervisor synthesis for a large system is achieving nonblockingness. The reason is that the total number of states of a plant model grows quickly with the number of local components, because the synchronous product incurs a Cartesian product over automata. To overcome this difficulty, some authors introduce sufficient conditions that allow local supervisor synthesis. For example, in [3] the authors propose the concept of modularity, which is then extended to the concept of local modularity in [4]. When local supervisors are (locally) modular, a globally nonblocking supervisory control is achieved. Nevertheless, testing (local) modularity itself usually imposes prohibitive computational complexity. Another notable work is presented in [7, 8], where, by imposing interface consistency and level-wise controllability among subsystems and local supervisors in a hierarchical setup, a very large nonblocking control problem may be solved; e.g., the system size reaches 10^21 states in the AIP example [8]. But the approach does not tell how to deliberately and systematically design interfaces that allow synthesis of local supervisors satisfying those properties. Instead, it assumes that those interfaces are given before synthesis, as mentioned in [9]. In [10] the authors present an interesting approach aimed at synthesizing a state-feedback supervisor. By introducing the concept of state tree structures, they propose to represent product states in binary decision diagrams (BDDs), upon which the power of symbolic computation (as manifested by the manipulation of BDDs) is fully utilized. It has been shown in [10] that a system with 10^24 states can be accommodated. Nevertheless, this approach is essentially centralized. No matter how efficient the symbolic computational technique is, such efficiency can never completely overcome the complexity issue in an industrial system that usually consists of hundreds or thousands of components. Besides, the proposed approach does not deal with partial observation.
In this paper we discuss how to synthesize a supervisor by using an appropriate abstraction of a system. Our first contribution is a novel automaton-based abstraction technique. The idea of abstraction has been known in the literature: e.g., in [11] abstraction is used in modular and hierarchical supervisor synthesis; it is also used in [12] for testing the nonblocking property, and in [13] for decentralized control. Nevertheless, those approaches are language-based and use natural projections. To make sure that nonblocking information is not incorrectly masked out by abstraction, the natural projections must possess the observer property [6], which does not always hold. Although a natural projection can always be modified to become an observer (with respect to a specific language) [14], such a modification has a potential drawback: the alphabet of the codomain of the projection may need to be fairly large to achieve the observer property, and consequently the projected image may not be small enough to allow supervisor synthesis for large systems. Our abstraction technique is automaton-based, thus different from those language-based abstraction techniques. There have been several research works on automaton abstraction, e.g. [15] [24] [25] [26] [27]. [15] aims to achieve weak bisimilarity between an automaton and its abstraction. [24] [25] [26] [27] first use silent events to replace internal events, then apply rewriting rules to ensure that appropriate equivalence relations, e.g. conflict equivalence in [24] [27], supervision equivalence in [25] and synthesis equivalence in [26], hold between automata before and after rewriting. Our approach does not use silent events, and its primary goal is to create an abstraction for an automaton G, not necessarily weakly bisimilar to G, such that any automaton S whose alphabet equals that of the abstraction and which is nonconflicting with the abstraction must be nonconflicting with G. If G is marking aware, then it is automatically true that S nonconflicting with G implies S nonconflicting with the abstraction; at this point our approach is close to achieving conflict equivalence, but with a procedure much simpler than those rewriting rules, and no silent events are needed. Our second contribution is to utilize the automaton-based abstraction in supervisor synthesis and to provide conditions under which the existence of a nonblocking supervisor for an abstraction of a plant guarantees the existence of a nonblocking supervisor for the original plant, and vice versa. Since abstraction usually results in nondeterministic automata, we consider supervisor synthesis with a nondeterministic plant model but a deterministic specification. The supervisor is required to be deterministic as well. There is a large volume of work on supervisor synthesis for nondeterministic systems, but most of it does not use abstraction, e.g. [23] [18] [5] [19] [20] [21]. Although [15] [24] [25] [26] [27] utilize abstraction in synthesis, their abstraction techniques are different from ours. Our third contribution is the concept of state normality, which allows computing a supremal nonblocking supervisor for a nondeterministic system. This is important for synthesis, especially considering that nondeterministic systems always behave like systems under partial observation.
This paper is organized as follows. In Section II we introduce an abstraction technique over nondeterministic automata. Then in Section III we describe how to apply it to synthesize a deterministic supervisor for a nondeterministic plant model under a deterministic specification. After an illustrative example in Section IV, conclusions are stated in Section V. Long proofs are presented in the Appendix.
2 Automaton Abstraction and Relevant Properties
In this section we follow the notations used in [16]. We first briefly review concepts of languages, natural projection, synchronous product and automaton product, then introduce the concept of automaton abstraction. After that, we present properties of abstraction which will be used in supervisor synthesis.
2.1 Concepts of Languages, Automaton Product and Abstraction
Let Σ be a finite alphabet and Σ∗ the Kleene closure of Σ, i.e. the collection of all finite sequences of events taken from Σ. Given two strings s, t ∈ Σ∗, s is called a prefix of t, written as s ≤ t, if there exists s′ ∈ Σ∗ such that ss′ = t, where ss′ denotes the concatenation of s and s′. We use ε to denote the empty string of Σ∗, such that for any string s ∈ Σ∗, εs = sε = s. A subset L ⊆ Σ∗ is called a language. L̄ := {s ∈ Σ∗ | (∃t ∈ L) s ≤ t} ⊆ Σ∗ is called the prefix closure of L, and L is called prefix closed if L̄ = L. Given two languages L, L′ ⊆ Σ∗, LL′ := {ss′ ∈ Σ∗ | s ∈ L ∧ s′ ∈ L′}.
Let Σ′ ⊆ Σ. A map P : Σ∗ → Σ′∗ is the natural projection with respect to (Σ, Σ′) if
1. P(ε) = ε
2. (∀σ ∈ Σ) P(σ) := σ if σ ∈ Σ′, and P(σ) := ε otherwise
3. (∀sσ ∈ Σ∗) P(sσ) = P(s)P(σ)
Given a language L ⊆ Σ∗, P(L) := {P(s) ∈ Σ′∗ | s ∈ L}. For any two languages L, L′ ⊆ Σ∗, we can show that P(LL′) = P(L)P(L′). The inverse image mapping of P is
P⁻¹ : 2^Σ′∗ → 2^Σ∗ : L ↦ P⁻¹(L) := {s ∈ Σ∗ | P(s) ∈ L}
Given L1 ⊆ Σ1∗ and L2 ⊆ Σ2∗, the synchronous product of L1 and L2 is defined as:
L1 || L2 := P1⁻¹(L1) ∩ P2⁻¹(L2) = {s ∈ (Σ1 ∪ Σ2)∗ | P1(s) ∈ L1 ∧ P2(s) ∈ L2}
where P1 : (Σ1 ∪ Σ2)∗ → Σ1∗ and P2 : (Σ1 ∪ Σ2)∗ → Σ2∗ are natural projections. It has been shown [16] that || is commutative and associative. Next, we introduce automaton product and abstraction.
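As a concrete illustration of these two definitions, the following sketch (ours, not from the report; strings are represented as tuples of event names) implements the natural projection and enumerates a synchronous product of finite languages up to a length bound, since L1 || L2 is in general infinite when private events exist:

```python
from itertools import product

def project(s, sigma_prime):
    """Natural projection P: erase the events of s outside sigma_prime."""
    return tuple(e for e in s if e in sigma_prime)

def sync_product(L1, sigma1, L2, sigma2, max_len=6):
    """Enumerate L1 || L2 = { s over (sigma1 ∪ sigma2)* :
    P1(s) ∈ L1 and P2(s) ∈ L2 }, restricted to strings of length <= max_len."""
    sigma = sorted(sigma1 | sigma2)
    result = set()
    for n in range(max_len + 1):
        for s in product(sigma, repeat=n):
            if project(s, sigma1) in L1 and project(s, sigma2) in L2:
                result.add(s)
    return result
```

For example, with L1 = {a} over Σ1 = {a} and L2 = {b} over Σ2 = {b}, the synchronous product contains exactly the two interleavings ab and ba.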
Given a nondeterministic finite-state automaton G = (X, Σ, ξ, x0, Xm), X stands for the state set, Σ for the alphabet, ξ : X × Σ → 2^X for the nondeterministic transition function, x0 for the initial state and Xm for the marker state set. As usual, we extend the domain of ξ from X × Σ to X × Σ∗, where ξ(x, sσ) := {x′ ∈ X | (∃x′′ ∈ ξ(x, s)) x′ ∈ ξ(x′′, σ)}. We bring in a new event symbol τ. An automaton G = (X, Σ ∪ {τ}, ξ, x0, Xm) is standardized if
x0 ∉ Xm ∧ (∀x ∈ X) [ξ(x, τ) ≠ ∅ ⟺ x = x0] ∧ (∀x ∈ X − {x0})(∀σ ∈ Σ) x0 ∉ ξ(x, σ)
A standardized automaton is nothing but an automaton whose initial state x0 has only outgoing transitions with the same label τ, and no incoming transitions. An ordinary automaton G = (X, Σ, ξ, x0, Xm) can be converted into a standardized automaton by simply: (1) extending the alphabet to Σ ∪ {τ}; (2) adding a new state x0′; (3) defining a new transition map ξ′ such that ξ′(x0′, τ) = {x0} and for any (x, σ) ∈ X × Σ we have ξ′(x, σ) = ξ(x, σ). The resulting automaton G′ = (X ∪ {x0′}, Σ ∪ {τ}, ξ′, x0′, Xm) is a standardized automaton. From now on, unless specified explicitly, we assume that each alphabet Σ contains τ, and φ(Σ) is the collection of all standardized finite-state automata whose alphabet is Σ. The role of τ will be explained shortly. We now introduce automaton product.
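The three-step standardization procedure above can be sketched as follows (our representation, not the report's: an automaton is a tuple (X, Sigma, xi, x0, Xm), with xi a dict mapping (state, event) to a set of successor states; the fresh state name is a hypothetical choice):

```python
def standardize(X, Sigma, xi, x0, Xm, new_init="x0'"):
    """Convert an ordinary automaton into a standardized one: extend the
    alphabet with tau, add a fresh initial state whose only outgoing
    transition is tau into the old initial state."""
    assert new_init not in X, "fresh-state name clash (pick another name)"
    X2 = X | {new_init}
    Sigma2 = Sigma | {"tau"}
    xi2 = dict(xi)                    # old transitions are kept unchanged
    xi2[(new_init, "tau")] = {x0}     # the single tau-transition
    return X2, Sigma2, xi2, new_init, Xm
```

By construction the new initial state has no incoming transitions and is not marked, so the result satisfies the standardization conditions.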
Given two nondeterministic automata Gi = (Xi, Σi, ξi, x0,i, Xm,i) ∈ φ(Σi) (i = 1, 2), the product of G1 and G2, written as G1 × G2, is an automaton in φ(Σ1 ∪ Σ2) such that
G1 × G2 = (X1 × X2, Σ1 ∪ Σ2, ξ1 × ξ2, (x0,1, x0,2), Xm,1 × Xm,2)
where ξ1 × ξ2 : X1 × X2 × (Σ1 ∪ Σ2) → 2^(X1×X2) is defined as follows:
(ξ1 × ξ2)((x1, x2), σ) := ξ1(x1, σ) × {x2} if σ ∈ Σ1 − Σ2; {x1} × ξ2(x2, σ) if σ ∈ Σ2 − Σ1; ξ1(x1, σ) × ξ2(x2, σ) if σ ∈ Σ1 ∩ Σ2
Clearly, × is commutative and associative. By a slight abuse of notation, from now on we use G1 × G2 to denote its reachable part. ξ1 × ξ2 is extended to X1 × X2 × (Σ1 ∪ Σ2)∗ → 2^(X1×X2).
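The three cases of the product transition rule can be sketched directly (again our dict-based representation; reachability pruning is omitted for brevity):

```python
def product_step(x1, x2, sigma, xi1, Sigma1, xi2, Sigma2):
    """Successors of state (x1, x2) under event sigma in G1 x G2,
    following the three-case definition of xi1 x xi2."""
    d1 = xi1.get((x1, sigma), set())
    d2 = xi2.get((x2, sigma), set())
    if sigma in Sigma1 and sigma not in Sigma2:   # private to G1
        return {(y1, x2) for y1 in d1}
    if sigma in Sigma2 and sigma not in Sigma1:   # private to G2
        return {(x1, y2) for y2 in d2}
    # shared event: both components must move (synchronization)
    return {(y1, y2) for y1 in d1 for y2 in d2}
```

Note that on a shared event the result is empty unless both components can execute it, which is exactly the synchronization constraint of the product.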
We can easily see that the product of two standardized automata is still a standardized automaton. Next, we introduce automaton abstraction, which requires the following concept of marking weak bisimilarity.
Definition 2.1. Given G = (X, Σ, ξ, x0, Xm), let Σ′ ⊆ Σ and let P : Σ∗ → Σ′∗ be the natural projection. A marking weak bisimulation relation on X with respect to Σ′ is an equivalence relation R ⊆ {(x, x′) ∈ X × X | x ∈ Xm ⟺ x′ ∈ Xm} such that
(∀(x, x′) ∈ R)(∀s ∈ Σ∗)(∀y ∈ ξ(x, s))(∃s′ ∈ Σ∗) P(s) = P(s′) ∧ (∃y′ ∈ ξ(x′, s′)) (y, y′) ∈ R
The largest marking weak bisimulation relation on X with respect to Σ′ is called marking weak bisimilarity on X with respect to Σ′, written as ≈Σ′,G.
Marking weak bisimulation is the same as the weak bisimulation relation described in [17], except for the special treatment of marker states. We now introduce abstraction.
Definition 2.2. Given G = (X, Σ, ξ, x0, Xm), let Σ′ ⊆ Σ. The automaton abstraction of G with respect to ≈Σ′,G is an automaton G/≈Σ′,G := (Y, Σ′, η, y0, Ym) where
1. Y := X/≈Σ′,G := {⟨x⟩ := {x′ ∈ X | (x, x′) ∈ ≈Σ′,G} | x ∈ X}
2. y0 := ⟨x0⟩
3. Ym := {y ∈ Y | y ∩ Xm ≠ ∅}
4. η : Y × Σ′ → 2^Y, where for any (y, σ) ∈ Y × Σ′, η(y, σ) := {y′ ∈ Y | (∃x ∈ y)(∃u, u′ ∈ (Σ − Σ′)∗) ξ(x, uσu′) ∩ y′ ≠ ∅}
We can easily check that, if G is standardized, then G/≈Σ′,G is also standardized. The time complexity of computing G/≈Σ′,G results mainly from computing X/≈Σ′,G, which can be estimated as follows. We first define a new automaton G′′ = (X, Σ′, ξ′′, x0, Xm), where for any x, x′ ∈ X and σ ∈ Σ′, x′ ∈ ξ′′(x, σ) if there exist u, u′ ∈ (Σ − Σ′)∗ such that x′ ∈ ξ(x, uσu′). Then we compute X/≈Σ′,G′′, and we can show that the result is equal to X/≈Σ′,G. The total number of transitions in G′′ is no more than mn², where n = |X| and m is the number of transitions in G. Based on a result shown in [22], the time complexity of computing X/≈Σ′,G′′ is O(mn² log n) if we ignore the complexity caused by checking the condition “x ∈ Xm ⟺ x′ ∈ Xm” in Def. 2.1. If we consider this extra condition, then the overall complexity is O(n(n − 1) + mn² log n), because we need to check at most n(n − 1) pairs of states.
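The construction of the auxiliary automaton G′′ can be sketched as a hidden-event closure computation (our code and naming; "hidden" stands for Σ − Σ′):

```python
def hidden_closure(x, xi, hidden):
    """All states reachable from x via events in `hidden` only
    (including x itself), by a simple depth-first search."""
    seen, stack = {x}, [x]
    while stack:
        y = stack.pop()
        for e in hidden:
            for z in xi.get((y, e), set()):
                if z not in seen:
                    seen.add(z)
                    stack.append(z)
    return seen

def xi_dprime(x, sigma, xi, Sigma, Sigma_prime):
    """xi''(x, sigma): states x' with x -u-> . -sigma-> . -u'-> x'
    for some u, u' over the hidden events Sigma - Sigma_prime."""
    hidden = Sigma - Sigma_prime
    succ = set()
    for y in hidden_closure(x, xi, hidden):        # the u-part
        for z in xi.get((y, sigma), set()):        # the sigma-step
            succ |= hidden_closure(z, xi, hidden)  # the u'-part
    return succ
```

Computing all of ξ′′ this way and then running a standard partition-refinement algorithm on G′′ yields X/≈Σ′,G, as described above.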
From now on, when G is clear from the context, we simply use ≈Σ′ to denote ≈Σ′,G, and use ⟨x⟩Σ′ for an element of X/≈Σ′,G. If Σ′ is also clear from the context, then we simply write ⟨x⟩ for ⟨x⟩Σ′.
As an illustration, suppose a standardized automaton G ∈ φ(Σ) is depicted in Figure 1, where the alphabet is Σ = {τ, a, b, c}. We take Σ′ = {τ, c}. Then we have
X/≈Σ′ = {⟨0⟩ = {0}, ⟨1⟩ = {1, 2, 3}, ⟨4⟩ = {4}}
The abstraction G/≈Σ′ is also depicted in Figure 1. Next, we present properties of automaton abstraction.
Figure 1: Example 1: A Standardized Automaton G and Automaton Abstraction G/≈Σ′
2.2 Properties of Automaton Abstraction
We first define a map B : φ(Σ) → 2^Σ∗ with
(∀G ∈ φ(Σ)) B(G) := {s ∈ Σ∗ | ξ(x0, s) ≠ ∅ ∧ (∃x ∈ ξ(x0, s))(∀s′ ∈ Σ∗) ξ(x, s′) ∩ Xm = ∅}
Any string s ∈ B(G) can lead to a state x from which no marker state is reachable, i.e. for any s′ ∈ Σ∗, ξ(x, s′) ∩ Xm = ∅. Such a state x is called a blocking state of G, and we call B(G) the blocking set of G. A state that is not a blocking state is called a nonblocking state. We say G is nonblocking if B(G) = ∅. Similarly, define another map N : φ(Σ) → 2^Σ∗ with
(∀G ∈ φ(Σ)) N(G) := {s ∈ Σ∗ | ξ(x0, s) ∩ Xm ≠ ∅}
We call N(G) the nonblocking set of G, which is simply the language recognized by G. It is possible that B(G) ∩ N(G) ≠ ∅, due to nondeterminism.
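Blocking states can be found by backward reachability (coreachability) from the marker states, as the following sketch shows (ours; for simplicity it ignores reachability from x0, so B(G) = ∅ exactly when no *reachable* state appears in the returned set):

```python
def blocking_states(X, xi, Xm):
    """States of G from which no marker state is reachable."""
    # Build the reverse transition relation once.
    preds = {x: set() for x in X}
    for (x, _event), succs in xi.items():
        for y in succs:
            preds[y].add(x)
    # Coreachable states: those that can reach Xm (backward search).
    coreach, stack = set(Xm), list(Xm)
    while stack:
        y = stack.pop()
        for x in preds[y]:
            if x not in coreach:
                coreach.add(x)
                stack.append(x)
    return X - coreach
```

Here state 2 below is blocking because its only role is to be entered and never leave toward a marker state.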
Definition 2.3. An automaton G = (X, Σ, ξ, x0, Xm) is marking aware with respect to Σ′ ⊆ Σ if
(∀x ∈ X − Xm)(∀s ∈ Σ∗) ξ(x, s) ∩ Xm ≠ ∅ ⇒ P(s) ≠ ε
where P : Σ∗ → Σ′∗ is the natural projection.
If G is marking aware with respect to Σ′, then any string s reaching a marker state from a non-marker state must contain at least one event in Σ′. A necessary and sufficient condition for G to be marking aware with respect to Σ′ is that Σ′ contain every event that labels a transition from a non-marker state to a marker state, namely {σ ∈ Σ | (∃x ∈ X − Xm)(∃x′ ∈ Xm) x′ ∈ ξ(x, σ)} ⊆ Σ′. We have the following result, which will be used extensively in the rest of this paper.
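The condition above is easy to check mechanically; a sketch (our naming) that collects the relevant events and tests containment:

```python
def marking_events(xi, Xm):
    """Events labelling a transition from a non-marker state into a
    marker state; these must all lie in Sigma' for marking awareness."""
    return {e for (x, e), succs in xi.items()
            if x not in Xm and succs & Xm}

def is_marking_aware(xi, Xm, Sigma_prime):
    """G is marking aware w.r.t. Sigma_prime iff every marking event
    is kept in the abstraction alphabet."""
    return marking_events(xi, Xm) <= Sigma_prime
```

This also tells the modeller which events to add to Σ′ when the check fails: exactly those returned by `marking_events` that are missing.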
Proposition 2.4. Given G ∈ φ(Σ), let Σ′ ⊆ Σ, and let P : Σ∗ → Σ′∗ be the natural projection. Then
1. P(B(G)) ⊆ B(G/≈Σ′) and P(N(G)) = N(G/≈Σ′).
2. If G is marking aware with respect to Σ′, then P(B(G)) = B(G/≈Σ′).
As an illustration of Prop. 2.4, Figure 2 depicts an example where Σ = {τ, a, b} and Σ′ is as chosen in the figure.
Figure 2: Example 2: G and G/≈Σ′
Here B(G/≈Σ′) = {τ, τb}, namely P(B(G)) ⊂ B(G/≈Σ′). In this example, to make G marking aware with respect to Σ′, a must be included in Σ′. If we set Σ′ = {τ, a} then P(B(G)) = B(G/≈Σ′), as predicted in Prop. 2.4.
We now explain why we introduce the event τ and standardized automata. If we do not use them, then we may not always have P(B(G)) ⊆ B(G/≈Σ′) and P(N(G)) ⊆ N(G/≈Σ′) (let alone P(N(G)) = N(G/≈Σ′)), which may cause supervisor synthesis based on a reduced model G/≈Σ′ to fail, in the sense that a nonblocking supervisor (whose precise definition will be given later) of G/≈Σ′ may not be a nonblocking one for G. This will become clear when we introduce supervisor synthesis.
Next, we want to answer the question whether G × S for some automaton S is nonblocking if and only if (G/≈Σ′) × S is nonblocking. Given an automaton G = (X, Σ, ξ, x0, Xm), for each x ∈ X let
N_G(x) := {s ∈ Σ∗ | ξ(x, s) ∩ Xm ≠ ∅}
We now introduce the following concept, which will be used extensively in this paper.
Definition 2.5. Given automata Gi = (Xi, Σi, ξi, xi,0, Xi,m) (i = 1, 2), we say G1 is nonblocking preserving with respect to G2, denoted as G1 ⊑ G2, if B(G1) ⊆ B(G2), N(G1) = N(G2) and for any s ∈ N(G1),
(∀x1 ∈ ξ1(x1,0, s))(∃x2 ∈ ξ2(x2,0, s)) N_G2(x2) ⊆ N_G1(x1) ∧ [x1 ∈ X1,m ⟺ x2 ∈ X2,m]
G1 is nonblocking equivalent to G2, denoted as G1 ≅ G2, if G1 ⊑ G2 and G2 ⊑ G1.
Def. 2.5 says that, if G1 is nonblocking preserving with respect to G2, then their individual nonblocking parts are equal, but G2's blocking behavior may be larger. If the blocking behaviors are also equal, then G1 and G2 are nonblocking equivalent. We now present a few results.
Proposition 2.6. (∀G1, G2 ∈ φ(Σ))(∀G3 ∈ φ(Σ′)) G1 ⊑ G2 ⇒ G1 × G3 ⊑ G2 × G3
Corollary 2.7. (∀G1, G2 ∈ φ(Σ))(∀G3 ∈ φ(Σ′)) G1 ≅ G2 ⇒ G1 × G3 ≅ G2 × G3.
Proof: Since G1 ≅ G2, by Def. 2.5 we have G1 ⊑ G2 and G2 ⊑ G1. Then by Prop. 2.6, G1 × G3 ⊑ G2 × G3 and G2 × G3 ⊑ G1 × G3, i.e. G1 × G3 ≅ G2 × G3.
Prop. 2.6 and Cor. 2.7 say that nonblocking preservation and nonblocking equivalence are invariant under automaton product.
Proposition 2.8. Given Gi ∈ φ(Σi) with i = 1, 2, let Σ′ ⊆ Σ1 ∪ Σ2. If Σ1 ∩ Σ2 ⊆ Σ′, then
1. (G1 × G2)/≈Σ′ ⊑ (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′).
2. If Gi (i = 1, 2) is marking aware with respect to Σi ∩ Σ′, then
(G1 × G2)/≈Σ′ ≅ (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′)
Proposition 2.8 is about the distribution of automaton abstraction over automaton product. As an illustration we present a simple example. Suppose we have Σ1 = {τ, a, µ} and Σ2 = {τ, b, c, µ}. Let G1 ∈ φ(Σ1) and G2 ∈ φ(Σ2) be as shown in Figure 3.
Figure 3: Example 3: G1 and G2
Suppose we pick Σ′ = {τ, a, b, µ} ⊇ Σ1 ∩ Σ2. The results of G1 × G2 and (G1 × G2)/≈Σ′ are depicted in Figure 4, and G1/≈Σ1∩Σ′, G2/≈Σ2∩Σ′ and (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′) are in Figure 5.
Figure 4: Example 3: G1 × G2 and (G1 × G2)/≈Σ′
Clearly, (G1 × G2)/≈Σ′ ⊑ (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′). But we can check that (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′) ⊑ (G1 × G2)/≈Σ′ does not hold. If we set Σ′ = {τ, a, c}, then Gi (i = 1, 2) is marking aware with respect to Σi ∩ Σ′, and we can check that nonblocking equivalence indeed holds.
Figure 5: Example 3: G1/≈Σ1∩Σ′, G2/≈Σ2∩Σ′ and (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′)
Theorem 2.9. Given Σ and Σ′ ⊆ Σ, let G ∈ φ(Σ) and S ∈ φ(Σ′). Then
1. B((G/≈Σ′) × S) = ∅ ⇒ B(G × S) = ∅
2. G is marking aware w.r.t. Σ′ ⇒ [B((G/≈Σ′) × S) = ∅ ⟺ B(G × S) = ∅]
Proof: Let P : Σ∗ → Σ′∗ be the natural projection.
B((G/≈Σ′) × S) = ∅
⟺ B((G/≈Σ′) × (S/≈Σ′)) = ∅ because S/≈Σ′ ≅ S and by Cor. 2.7
⇒ B((G × S)/≈Σ′) = ∅ because by Prop. 2.8, (G × S)/≈Σ′ ⊑ (G/≈Σ′) × (S/≈Σ′)
⇒ P(B(G × S)) = ∅ by Prop. 2.4
⟺ B(G × S) = ∅
Thus, B((G/≈Σ′) × S) = ∅ ⇒ B(G × S) = ∅.
Clearly, S is marking aware with respect to Σ′ because S ∈ φ(Σ′). If G is also marking aware with respect to Σ′, then by Prop. 2.8 we have
B((G × S)/≈Σ′) = B((G/≈Σ′) × (S/≈Σ′))  (1)
Furthermore, G × S is also marking aware with respect to Σ′ because both G and S are marking aware with respect to Σ′. By Prop. 2.4 we get that
P(B(G × S)) = B((G × S)/≈Σ′)  (2)
Thus we have
B((G/≈Σ′) × S) = ∅
⟺ B((G/≈Σ′) × (S/≈Σ′)) = ∅
⟺ B((G × S)/≈Σ′) = ∅ by Equation (1)
⟺ P(B(G × S)) = ∅ by Equation (2)
⟺ B(G × S) = ∅
Thus, if G is marking aware with respect to Σ′, then B((G/≈Σ′) × S) = ∅ if and only if B(G × S) = ∅.
What Theorem 2.9 says can be interpreted informally as follows: if the abstraction of G is ‘nonconflicting’ with S, i.e. B((G/≈Σ′) × S) = ∅, then G is ‘nonconflicting’ with S. But to make the opposite implication true, we need to impose the marking awareness condition. We will see how to use this result in supervisor synthesis shortly. But before that, we need to address a computational issue: if G is very large, e.g. G = G1 × · · · × Gn, how can we compute G/≈Σ′ efficiently? To overcome this difficulty, we propose the following algorithm.
Suppose I = {1, 2, · · · , n} for some n ∈ ℕ. For any J ⊆ I, let ΣJ := ∪j∈J Σj. Let Σ′ ⊆ ∪i∈I Σi.
Sequential Abstraction over Product (SAP):
(1) Input of SAP: a collection of automata {Gi | i ∈ I}.
(2) For k = 1, 2, · · · , n, perform the following computation:
• Set Jk := {1, 2, · · · , k} and Tk := ΣJk ∩ (ΣI−Jk ∪ Σ′).
• If k = 1 then W1 := G1/≈T1.
• If k > 1 then Wk := (Wk−1 × Gk)/≈Tk.
(3) Output of SAP: Wn.
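The alphabet bookkeeping of SAP, i.e. the sequence Tk = ΣJk ∩ (ΣI−Jk ∪ Σ′), can be sketched on its own (our code; the automaton operations Wk = (Wk−1 × Gk)/≈Tk themselves are assumed to be provided by product and abstraction routines and are not implemented here):

```python
def sap_alphabets(alphabets, sigma_prime):
    """Given the component alphabets Sigma_1..Sigma_n (a list of sets)
    and the target alphabet Sigma', return the abstraction alphabets
    T_1..T_n used by SAP."""
    n = len(alphabets)
    schedule = []
    for k in range(1, n + 1):
        sigma_Jk = set().union(*alphabets[:k])      # events seen so far
        sigma_rest = set().union(*alphabets[k:])    # events still to come
        schedule.append(sigma_Jk & (sigma_rest | sigma_prime))
    return schedule
```

Intuitively, Tk keeps an event only if it is still needed: either some later component shares it, or it belongs to the final alphabet Σ′; everything else can be abstracted away at step k.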
To show that SAP fulfils our expectation, we first need some preparation.
Proposition 2.10. (∀Σ′ ⊆ Σ)(∀G1, G2 ∈ φ(Σ)) G1 ⊑ G2 ⇒ G1/≈Σ′ ⊑ G2/≈Σ′.
Corollary 2.11. (∀Σ′ ⊆ Σ)(∀G1, G2 ∈ φ(Σ)) G1 ≅ G2 ⇒ G1/≈Σ′ ≅ G2/≈Σ′.
Proof: Using Prop. 2.10 and Def. 2.5, the corollary follows.
Prop. 2.10 and Cor. 2.11 state that nonblocking preservation and nonblocking equivalence are invariant under abstraction.
Proposition 2.12. (∀Σ′′ ⊆ Σ′ ⊆ Σ)(∀G ∈ φ(Σ)) G/≈Σ′′ ≅ (G/≈Σ′)/≈Σ′′.
Prop. 2.12 is the chain rule of automaton abstraction: an automaton abstraction can be replaced by a sequence of automaton abstractions, and the results are nonblocking equivalent to each other. We now introduce the main result about SAP.
Theorem 2.13. Suppose Wn is computed by SAP. Then (×i∈I Gi)/≈Σ′ ⊑ Wn.
Proof: We use induction to show that
(∀k : 1 ≤ k ≤ n) (×j∈Jk Gj)/≈Tk ⊑ Wk  (3)
It is clear that G1/≈T1 ⊑ W1. Suppose Equation (3) is true for all k ≤ l ∈ ℕ. We need to show that it also holds for k = l + 1. By the procedure,
(×j∈Jl+1 Gj)/≈Tl+1 ≅ ((×j∈Jl+1 Gj)/≈Tl∪Σl+1)/≈Tl+1 by Prop. 2.12
⊑ (((×j∈Jl Gj)/≈Tl) × Gl+1)/≈Tl+1 because Σl+1 ∩ ΣJl ⊆ Tl ∪ Σl+1, and by Prop. 2.8 and Prop. 2.10
⊑ (Wl × Gl+1)/≈Tl+1 by the induction hypothesis, Prop. 2.6 and Prop. 2.10
= Wl+1
Therefore Equation (3) holds for all k, in particular for k = n, where Tn = ΣI ∩ Σ′ = Σ′. The theorem follows.
SAP allows us to obtain an abstraction of the entire system G = ×i∈I Gi in a sequential way. Thus, we can avoid computing G explicitly, which may be prohibitively large for systems of industrial size. Next, we discuss how to synthesize a supervisor based on automaton abstractions.
3 Supervisor Synthesis over Nondeterministic
Finite-State Automata
As described in the introduction, there has been a large volume of work on supervisor synthesis for nondeterministic systems. In this section we mainly discuss how to apply automaton abstraction in such synthesis. We assume that the plant model G ∈ φ(Σ) is nondeterministic, but the requirement H ∈ φ(∆) with ∆ ⊆ Σ is deterministic. Here H need not be standardized. Our goal is to synthesize a deterministic supervisor S ∈ φ(Σ′) with ∆ ⊆ Σ′ ⊆ Σ. We first introduce some concepts.
Given G = (X, Σ, ξ, x0, Xm), define
EG : X → 2^Σ : x ↦ EG(x) := {σ ∈ Σ | ξ(x, σ) ≠ ∅}
Thus, EG(x) is simply the set of all events allowable at x in G. We now introduce the concept of state controllability. Let Σ = Σc ∪̇ Σuc, where Σc is the set of controllable events, Σuc is the set of uncontrollable events and τ ∈ Σuc. Let L(G) := {s ∈ Σ∗ | ξ(x0, s) ≠ ∅}.
Definition 3.1. Let G = (X, Σ, ξ, x0, Xm), Σ′ ⊆ Σ, A = (Y, Σ′, η, y0, Ym) ∈ φ(Σ′) and let P : Σ∗ → Σ′∗ be the natural projection. A is state-controllable with respect to G and Σuc if
(∀s ∈ L(G × A))(∀x ∈ ξ(x0, s))(∀y ∈ η(y0, P(s))) EG(x) ∩ Σuc ∩ Σ′ ⊆ EA(y)
The concept of state controllability is slightly different from the one used in the literature, e.g. [23], because of the involvement of Σ′. We can check that, if A is state-controllable, then L(G × A)Σuc ∩ L(G) ⊆ L(G × A). Thus, state controllability always implies the language controllability of the RW paradigm. But the reverse statement is not true unless both A and G are deterministic. We now introduce the concept of state observability. Let Σ = Σo ∪̇ Σuo, where Σo is the set of observable events and Σuo is the set of unobservable events. Let Po : Σ∗ → Σo∗ be the natural projection.
Definition 3.2. Let G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), Σ′ ⊆ Σ and A = (Y, Σ′, η, y0, Ym) ∈ φ(Σ′). A is state-observable with respect to G and Po if for any s, s′ ∈ L(G × A) with Po(s) = Po(s′), we have
(∀(x, y) ∈ ξ × η((x0, y0), s))(∀(x′, y′) ∈ ξ × η((x0, y0), s′)) EG×A(x, y) ∩ EG(x′) ∩ Σ′ ⊆ EA(y′)
State observability as defined in Def. 3.2 is similar to the notion defined in [27], except that in [27] the authors consider A to be a sub-automaton of G and only the silent event is unobservable. Def. 3.2 says that, if A is state-observable, then for any two states (x, y) and (x′, y′) in G × A reachable by two strings s and s′ with the same projected image (i.e. Po(s) = Po(s′)), any event allowed at (x, y) and at x′ must be allowed at y′ as well. We can check that, if A is state-observable, then
(∀s, s′ ∈ L(G × A))(∀σ ∈ Σ) Po(s) = Po(s′) ∧ sσ ∈ L(G × A) ∧ s′σ ∈ L(G) ⇒ s′σ ∈ L(G × A)
Thus, state observability implies language observability. But the reverse statement is not always true unless both A and G are deterministic. Notice that, even if Σo = Σ, i.e. every event is observable, A may still not be state-observable, owing to nondeterminism. In many applications we are interested in an even stronger observability property called state normality, defined as follows.
Definition 3.3. Let G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ), Σ′ ⊆ Σ, A = (Y, Σ′, η, y0, Ym) ∈ φ(Σ′) and let P : Σ∗ → Σ′∗ be the natural projection. A is state-normal with respect to G and Po if for any s ∈ L(G × A) and s′ ∈ Po⁻¹(Po(s)) ∩ L(G × A), we have
(∀(x, y) ∈ ξ × η((x0, y0), s′))(∀s′′ ∈ Σ∗) Po(s′s′′) = Po(s) ⇒ [ξ(x, s′′) ≠ ∅ ⇒ η(y, P(s′′)) ≠ ∅]
We can check that, if A is state-normal with respect to G and Po, then
L(G) ∩ Po⁻¹(Po(L(G × A))) ⊆ L(G × A)
which means L(G × A) is language-normal with respect to L(G) and Po. The reverse statement is not true unless both A and G are deterministic. Furthermore, we can check that state normality implies state observability, but not vice versa. We now introduce the concept of supervisors.
Definition 3.4. Given G = (X, Σ, ξ, x0, Xm) ∈ φ(Σ) and Σ′ ⊆ Σ, a deterministic finite-state automaton S = (Y, Σ′, η, y0, Ym) ∈ φ(Σ′) is a nonblocking state-controllable state-observable (or state-normal) supervisor of G with respect to a specification H ∈ φ(∆) with ∆ ⊆ Σ′ under Po, if the following hold:
1. N(G × S) ⊆ N(G × H)
2. B(G × S) = ∅
3. S is state-controllable with respect to G and Σuc
4. S is state-observable (or state-normal) with respect to G and Po
The first condition of Def. 3.4 says that the closed-loop behavior (CLB) satisfies the specification H, and the second says the CLB must be nonblocking. The third and fourth are self-explanatory.
Proposition 3.5. Given G ∈ φ(Σ) and H ∈ φ(∆) with ∆ ⊆ Σ′ ⊆ Σ, there exists a nonblocking state-controllable state-observable (or state-normal) supervisor S ∈ φ(Σ′) of G with respect to H if and only if there exists an automaton A ∈ φ(Σ′) such that:
1. N(G × A) ⊆ N(G × H)
2. B(G × A) = ∅
3. A is state-controllable with respect to G and Σuc
4. A is state-observable (or state-normal) with respect to G and Po
Prop. 3.5 concerns the existence of a nonblocking state-controllable state-observable (or state-normal) supervisor; its proof indicates that such a supervisor is simply a (canonical) recognizer of an automaton A for which those four conditions hold. In the language-based framework we know that controllability and normality are closed under language union. The following result shows that state controllability and state normality bear a similar feature.
Proposition 3.6. Given G ∈ φ(Σ) and ∆ ⊆ Σ′ ⊆ Σ, let Si ∈ φ(Σ′) (i = 1, 2) be a nonblocking state-controllable state-normal supervisor of G w.r.t. H ∈ φ(∆). Let S ∈ φ(Σ′) be a deterministic automaton such that N(S) = N(S1) ∪ N(S2) and L(S) equals the prefix closure of N(S). Then S is a nonblocking state-controllable state-normal supervisor of G w.r.t. H.
Prop. 3.6 says that the ‘union’ of two nonblocking state-controllable state-normal (NSCSN) supervisors is still an NSCSN supervisor. We define the set
CN(G, H) := {S ∈ φ(Σ′) | S is an NSCSN supervisor of G w.r.t. H ∧ L(S) ⊆ L(G)}
From Prop. 3.6 we can derive that CN(G, H) has a unique element Ŝ such that for any S ∈ CN(G, H) we have N(S) ⊆ N(Ŝ). We call Ŝ the supremal nonblocking state-controllable state-normal supervisor of G with respect to H. In practice we are interested in such a supremal NSCSN supervisor, which can be computed from G × H; in this paper we will not discuss this issue. Instead, we focus on two questions: (1) under what conditions is a nonblocking supervisor for an abstract plant model G/≈Σ′ with Σ′ ⊆ Σ also a nonblocking supervisor for the original plant model G? (2) under what conditions is a nonblocking supervisor S ∈ φ(Σ′) for G also a nonblocking supervisor for the abstract model G/≈Σ′? To answer these questions, we need the following results.
Lemma 3.7. Given G ∈ φ(Σ) and Σ′ ⊆ Σ, let S ∈ φ(Σ′). Then S is state-controllable with respect to G/≈Σ′ and Σuc ∩ Σ′ if and only if S is state-controllable with respect to G and Σuc.
Lemma 3.8. Given G ∈ φ(Σ) and Σ′ ⊆ Σ, let S ∈ φ(Σ′) and let Po′ : Σ′∗ → (Σ′ ∩ Σo)∗ be the natural projection. Then (1) If S is state-observable w.r.t. G/≈Σ′ and Po′, then S is state-observable w.r.t. G and Po. (2) If Σo ⊆ Σ′ and S is state-observable w.r.t. G and Po, then S is state-observable w.r.t. G/≈Σ′ and Po.
Lemma 3.9. Given G ∈ φ(Σ) and Σ′ ⊆ Σ, let S ∈ φ(Σ′) and let Po′ : Σ′∗ → (Σ′ ∩ Σo)∗ be the natural projection. Then (1) If S is state-normal w.r.t. G/≈Σ′ and Po′, then S is state-normal w.r.t. G and Po. (2) If Σo ⊆ Σ′ and S is state-normal w.r.t. G and Po, then S is state-normal w.r.t. G/≈Σ′ and Po.
Theorem 3.10. Given G ∈ φ(Σ) and a deterministic automaton H ∈ φ(∆) with ∆ ⊆ Σ′ ⊆ Σ, if there exists a nonblocking state-controllable state-observable (or state-normal) supervisor S ∈ φ(Σ′) for G/≈Σ′ with respect to H, then S is also a nonblocking state-controllable state-observable (or state-normal) supervisor for G with respect to H.
Proof: Since S is a nonblocking state-controllable state-observable (or state-normal) supervisor of G/≈Σ′ with respect to H, by Def. 3.4,
1. N((G/≈Σ′) × S) ⊆ N((G/≈Σ′) × H)
2. B((G/≈Σ′) × S) = ∅
3. S is state-controllable w.r.t. G/≈Σ′ and Σuc ∩ Σ′
4. S is state-observable (or state-normal) w.r.t. G/≈Σ′ and P′o : Σ′∗ → (Σo ∩ Σ′)∗
By Lemma 3.7, S is state-controllable with respect to G and Σuc. By Lemma 3.8, S is state-observable with respect to G and Po, or by Lemma 3.9, S is state-normal with respect to G and Po. Since B((G/≈Σ′) × S) = ∅, by Theorem 2.9 we get that B(G × S) = ∅. Finally, we show that N(G × S) ⊆ N(G × H) as follows:

N((G/≈Σ′) × S) ⊆ N((G/≈Σ′) × H)  by (1)
⇒ N(G/≈Σ′) || N(S) ⊆ N(G/≈Σ′) || N(H)
⇒ N(G) || N(G/≈Σ′) || N(S) ⊆ N(G) || N(G/≈Σ′) || N(H)
⇒ N(G) || P(N(G)) || N(S) ⊆ N(G) || P(N(G)) || N(H)  by Prop. 2.4
⇒ N(G) || N(S) ⊆ N(G) || N(H)  because N(G) = N(G) || P(N(G))
⇒ N(G × S) ⊆ N(G × H)

Therefore, the theorem is true.
Theorem 3.10 says that a nonblocking supervisor S for G/ ≈Σ′ is also a nonblocking
supervisor for G.
Theorem 3.11. Given G ∈ φ(Σ) and a deterministic automaton H ∈ φ(∆) with ∆ ⊆ Σ′ ⊆ Σ, suppose G is marking aware with respect to Σ′ and Σo ⊆ Σ′. Then a nonblocking state-controllable state-observable (or state-normal) supervisor S ∈ φ(Σ′) for G with respect to H is also a nonblocking state-controllable state-observable (or state-normal) supervisor for G/≈Σ′ with respect to H.
Proof: Since S is a nonblocking state-controllable state-observable (or state-normal) supervisor of G with respect to H, by Def. 3.4,
1. N(G × S) ⊆ N(G × H)
2. B(G × S) = ∅
3. S is state-controllable with respect to G and Σuc
4. S is state-observable (or state-normal) with respect to G and Po
By Lemma 3.7, S is state-controllable with respect to G/≈Σ′ and Σuc ∩ Σ′. By Lemma 3.8, S is state-observable with respect to G/≈Σ′ and P′o, or by Lemma 3.9, S is state-normal with respect to G/≈Σ′ and P′o. Since B(G × S) = ∅ and G is marking aware with respect to Σ′, by Theorem 2.9 we get that B((G/≈Σ′) × S) = ∅. Finally, we show that N((G/≈Σ′) × S) ⊆ N((G/≈Σ′) × H) as follows:
N((G/≈Σ′) × S) = N((G/≈Σ′) × (S/≈Σ′))  because S/≈Σ′ ≅ S and by Cor. 2.7
= N((G × S)/≈Σ′)  by Prop. 2.8
= P(N(G × S))  by Prop. 2.4
⊆ P(N(G × H))  by (1)
= N((G × H)/≈Σ′)  by Prop. 2.4
= N((G/≈Σ′) × (H/≈Σ′))  by Prop. 2.8
= N((G/≈Σ′) × H)  because H/≈Σ′ ≅ H and by Cor. 2.7

Therefore, the theorem is true.
Theorem 3.11 says that, if G is marking aware with respect to Σ′ and Σo ⊆ Σ′, then a nonblocking supervisor of G is also a nonblocking supervisor of G/≈Σ′.
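The theorems above relate supervisors of G and of its abstraction through products of component models; the product × itself synchronizes the components on shared events and interleaves their private events. A minimal sketch of such a product for nondeterministic automata (the tuple encoding and the event names are our own illustration, not the paper's formal definition):

```python
def sync_product(A1, A2):
    """Product of two nondeterministic automata: shared events move both
    components simultaneously; private events move one component only.
    An automaton is (states, alphabet, delta, init, marked), where delta
    maps (state, event) to a set of successor states."""
    X1, S1, d1, x01, M1 = A1
    X2, S2, d2, x02, M2 = A2
    shared = S1 & S2
    delta = {}
    for p in X1:
        for q in X2:
            for e in S1 | S2:
                if e in shared:
                    succ = {(p2, q2)
                            for p2 in d1.get((p, e), set())
                            for q2 in d2.get((q, e), set())}
                elif e in S1:
                    succ = {(p2, q) for p2 in d1.get((p, e), set())}
                else:
                    succ = {(p, q2) for q2 in d2.get((q, e), set())}
                if succ:
                    delta[((p, q), e)] = succ
    states = {(p, q) for p in X1 for q in X2}
    marked = {(p, q) for p in M1 for q in M2}
    return states, S1 | S2, delta, (x01, x02), marked

# Two tiny components sharing only the event "t".
G1 = ({0, 1}, {"a", "t"}, {(0, "a"): {1}, (1, "t"): {0}}, 0, {1})
G2 = ({0, 1}, {"b", "t"}, {(0, "b"): {1}, (1, "t"): {0}}, 0, {1})
_, _, delta, init, marked = sync_product(G1, G2)
print(delta[((0, 0), "a")])   # → {(1, 0)}
print(delta[((1, 1), "t")])   # → {(0, 0)}
```

The private event "a" moves only the first component, while the shared event "t" requires both components to take it together, mirroring how G1 × G2 is formed in the example of the next section.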
4 Example
As an illustration we present the following example. Suppose we have two machines which are functionally identical except for individual event labels. The system is depicted in Figure 6. Each machine Gi (i = 1, 2) has the following standard operations: (1)
Figure 6: Example 4: A Simple Processing Unit
fetching a work piece (ai); (2) preprocessing (bi); (3) postprocessing (ci); (4) polishing
(ei); (5) packaging (di). After preprocessing bi, there are two choices: to be postprocessed
directly (ci) or to be polished first (ei) before postprocessing. The latter gives a product
with better quality. The negative aspect is that polishing may cause the machine G1 to
For each alphabet Σi, the controllable alphabet is Σi,c = {ai, ei} and the observable alphabet is Σi,o = Σi; that is, every event is observable (for simplicity). There is one
specification H ∈ φ(∆) with ∆ = {e1, e2}, depicted in Figure 7, saying that if a work
Figure 7: Example 4: The Specification H ∈ φ(∆)
piece is polished in G1 (e1), then a work piece must be polished in G2 afterwards (e2).
We now start to synthesize a deterministic nonblocking state-controllable state-normal supervisor that enforces the specification H.
First, we create an appropriate abstraction of G1 × G2. We pick Σ′ = {τ, a1, a2, e1, e2}. The motivation is that, since ∆ ⊆ Σ′, the abstraction (G1 × G2)/≈Σ′ can capture the specification H; and since all controllable events are in Σ′, the abstraction (G1 × G2)/≈Σ′ also retains all means of control available to G1 × G2 itself. Since Σ1 ∩ Σ2 = {τ} ⊆ Σ′, by Prop. 2.8,

(G1 × G2)/≈Σ′ ⊑ (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′)
The abstractions G1/≈Σ1∩Σ′ and G2/≈Σ2∩Σ′ are depicted in Figure 8. The product of the two
Figure 8: Example 4: The Abstractions G1/ ≈Σ1∩Σ′ and G2/ ≈Σ2∩Σ′
abstractions, G′ := (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′), is depicted in Figure 9. We now use G′ and H to synthesize a supervisor. The product G′ × H is also depicted in Figure 9. Clearly, the transitions e1 between states (2, 0) and (3, 1) and between states (5, 0) and (4, 1) in G′ × H must be disabled; otherwise the blocking states (3, 1) and (4, 1) will be reached. Once these two transitions are disabled, the transitions e1 between states (2, 0) and (1, 1) and between states (5, 0) and (6, 1) must be disabled as well, because otherwise the remaining automaton is not state-normal (and not state-observable). After removing the transitions e1 at states (2, 0) and (5, 0) in Figure 9, the remaining reachable part A, depicted in Figure 10, is nonblocking, state-controllable and state-normal (and state-observable). By Prop. 3.5, the canonical recognizer S of the marked behavior N(A), depicted in Figure 11, is a nonblocking state-controllable and state-normal supervisor of G′
with respect to H. We can see that S does not allow events e1 and e2 to happen. It is not
difficult to check that S is a nonblocking state-controllable state-normal supervisor of G1× G2 with respect to the specification H, as predicted by Theorem 3.10. We can
Figure 9: Example 4: The Products G′ = (G1/≈Σ1∩Σ′) × (G2/≈Σ2∩Σ′) and G′ × H
Figure 10: Example 4: Nonblocking, State-Controllable, State-Observable Automaton A
verify that the maximum number of states of any intermediate automaton is 13, which occurs when we compute G′ × H. Clearly, abstraction helps to reduce the computational complexity in this example, because otherwise we would have to face the product G1 × G2 × H directly, which has 61 states.
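The blocking analysis in this example (finding that states (3, 1) and (4, 1) are blocking) amounts to a backward reachability check from the marked states. A minimal sketch, using a hypothetical transition structure rather than the actual G′ × H:

```python
def blocking_states(states, delta, marked):
    """Return the states from which no marked state is reachable.
    delta maps (state, event) to a set of successor states."""
    coaccessible = set(marked)        # backward reachability from the marked states
    changed = True
    while changed:
        changed = False
        for (x, _e), succs in delta.items():
            if x not in coaccessible and succs & coaccessible:
                coaccessible.add(x)
                changed = True
    return set(states) - coaccessible

# Hypothetical fragment: from state 3 no marked state is reachable,
# so a supervisor would have to disable the transition leading into it.
states = {0, 1, 2, 3}
delta = {(0, "a"): {1, 3}, (1, "b"): {2}}
print(blocking_states(states, delta, marked={2}))   # → {3}
```

Iterating this fixpoint runs in time polynomial in the number of transitions, which is why keeping the intermediate automata small (13 states instead of 61 here) pays off.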
5 Conclusions
In this paper we first present a new abstraction technique and provide some of its properties. Then we apply this technique in supervisor synthesis. We consider the problem of synthesizing a deterministic supervisor for a nondeterministic plant model and a deterministic specification. After introducing the concepts of state controllability, state observability and state normality, we show that a nonblocking state-controllable state-observable (or state-normal) supervisor of an abstraction G/≈Σ′ under a specification H is also a
Figure 11: Example 4: The Supervisor S ∈ φ(Σ′)
nonblocking supervisor of the original plant G under the same specification. The reverse statement is also true if all observable events are contained in Σ′ and the plant G is marking aware with respect to Σ′. The concept of marking awareness is used to prevent extra blocking behaviors created by abstraction. Evidence shows that it may be replaced by a more relaxed condition, which will be addressed in our future papers. In this paper we also present a sufficient and necessary condition for the existence of a nonblocking supervisor and show that there exists a supremal nonblocking state-controllable and state-normal supervisor for a plant G and a specification H. The concrete procedure to compute such a supremal supervisor is not provided, owing to the main focus of this paper and the page limit; it will be addressed in our future papers.
Appendix
1. Proof of Prop. 2.4: Let ξ′ be the transition map of G/≈Σ′. First we show that P(B(G)) ⊆ B(G/≈Σ′). For each string s ∈ P(B(G)), there exists t ∈ B(G) with P(t) = s such that

(∃x ∈ ξ(x0, t))(∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm = ∅

Since G is standardized, P(t) ≠ ε iff t ≠ ε. Thus, we get that < x > ∈ ξ′(< x0 >, P(t)). Because (∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm = ∅, we have that (∀s′ ∈ Σ′∗) ξ′(< x >, s′) ∩ (Xm/≈Σ′) = ∅. Thus, s = P(t) ∈ B(G/≈Σ′).
To show that P(N(G)) ⊆ N(G/≈Σ′), let s ∈ P(N(G)). Then

(∃t ∈ N(G)) P(t) = s ∧ ξ(x0, t) ∩ Xm ≠ ∅

Since G is standardized, ξ(x0, t) ∩ Xm ≠ ∅ implies ξ′(< x0 >, P(t)) ∩ (Xm/≈Σ′) ≠ ∅. Thus, s ∈ N(G/≈Σ′). To show N(G/≈Σ′) ⊆ P(N(G)), let s ∈ N(G/≈Σ′). Then we have

ξ′(< x0 >, s) ∩ (Xm/≈Σ′) ≠ ∅

which means there exists t ∈ Σ∗ with P(t) = s such that ξ(x0, t) ∩ Xm ≠ ∅. Thus, t ∈ N(G), namely P(t) = s ∈ P(N(G)). Therefore, we have P(N(G)) = N(G/≈Σ′).
Finally, suppose G is marking aware with respect to Σ′. To show B(G/≈Σ′) = P(B(G)), we only need to show that B(G/≈Σ′) ⊆ P(B(G)). For each string s ∈ B(G/≈Σ′), we have

(∃ < x > ∈ ξ′(< x0 >, s))(∀s′ ∈ Σ′∗) ξ′(< x >, s′) ∩ (Xm/≈Σ′) = ∅

from which we can derive that there exists t ∈ Σ∗ such that P(t) = s and

x ∈ ξ(x0, t) ∧ (∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm ≠ ∅ ⇒ t′ ∈ (Σ − Σ′)∗

Clearly, x ∉ Xm, because otherwise ξ′(< x >, ε) ∩ (Xm/≈Σ′) ≠ ∅. We claim that x is a blocking state of G. Otherwise, there exists t′ ∈ Σ∗ such that ξ(x, t′) ∩ Xm ≠ ∅. Since G is marking aware with respect to Σ′, we have that t′ ∉ (Σ − Σ′)∗, which contradicts the fact that

(∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm ≠ ∅ ⇒ t′ ∈ (Σ − Σ′)∗

Thus, the claim is true, which means t ∈ B(G). Thus, s = P(t) ∈ P(B(G)).
2. Proof of Prop. 2.8: Let Gi = (Xi, Σi, ξi, xi,0, Xi,m) ∈ φ(Σi) with i = 1, 2. For notational simplicity, let Σ̂i = Σi ∩ Σ′, and let P : (Σ1 ∪ Σ2)∗ → Σ′∗, Pi : Σi∗ → Σ̂i∗, P̂i : Σ′∗ → Σ̂i∗ and Qi : (Σ1 ∪ Σ2)∗ → Σi∗ be natural projections, ξ′ the transition map of (G1 × G2)/≈Σ′, and ξ′i the transition map of Gi/≈Σ̂i (i = 1, 2).
First, we have the following:

N((G1 × G2)/≈Σ′) = P(N(G1 × G2))  by Prop. 2.4
= P(N(G1) || N(G2))
= P1(N(G1)) || P2(N(G2))  because Σ1 ∩ Σ2 ⊆ Σ′
= N(G1/≈Σ̂1) || N(G2/≈Σ̂2)  by Prop. 2.4
= N((G1/≈Σ̂1) × (G2/≈Σ̂2))
Next, we show that B((G1 × G2)/≈Σ′) ⊆ B((G1/≈Σ̂1) × (G2/≈Σ̂2)). Let s ∈ B((G1 × G2)/≈Σ′). Then there exists (x1, x2) ∈ X1 × X2 such that

< (x1, x2) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, s) ∧ (∀s′ ∈ Σ′∗) ξ′(< (x1, x2) >Σ′, s′) ∩ (X1,m × X2,m)/≈Σ′ = ∅

which means (x1, x2) ∉ X1,m × X2,m and there exists t ∈ (Σ1 ∪ Σ2)∗ with P(t) = s such that

(x1, x2) ∈ ξ1 × ξ2((x1,0, x2,0), t) ∧ (∀t′ ∈ Σ∗) ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅ ⇒ t′ ∈ ((Σ1 ∪ Σ2) − Σ′)∗

Since G1 and G2 are standardized, from (x1, x2) ∈ ξ1 × ξ2((x1,0, x2,0), t) and the fact that Σ1 ∩ Σ2 ⊆ Σ′ we can derive that

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), s)

We claim that (< x1 >Σ̂1, < x2 >Σ̂2) is a blocking state of (G1/≈Σ̂1) × (G2/≈Σ̂2). Otherwise, there exists s′ ∈ Σ′∗ such that

ξ′1 × ξ′2((< x1 >Σ̂1, < x2 >Σ̂2), s′) ∩ (X1,m/≈Σ̂1) × (X2,m/≈Σ̂2) ≠ ∅

Since (x1, x2) ∉ X1,m × X2,m, we have (< x1 >Σ̂1, < x2 >Σ̂2) ∉ (X1,m/≈Σ̂1) × (X2,m/≈Σ̂2). Thus, s′ ≠ ε, which means there exists t′ ∈ Σ∗ with P(t′) = s′ (hence t′ ∉ ((Σ1 ∪ Σ2) − Σ′)∗) such that ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅, which contradicts the fact that

(∀t′ ∈ Σ∗) ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅ ⇒ t′ ∈ ((Σ1 ∪ Σ2) − Σ′)∗

From the claim we get that s ∈ B((G1/≈Σ̂1) × (G2/≈Σ̂2)).
Let s ∈ N((G1 × G2)/≈Σ′). For any (x1, x2) ∈ X1 × X2 with

< (x1, x2) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, s)

we have

(∃t ∈ Σ∗) P(t) = s ∧ (x1, x2) ∈ ξ1 × ξ2((x1,0, x2,0), t)

Since G1 and G2 are standardized, if s = ε, then t = ε, which means (x1, x2) = (x1,0, x2,0). Clearly, we have

(< x1,0 >Σ̂1, < x2,0 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), ε)

If s ≠ ε, then by the definition of automaton abstraction and the assumption that Σ1 ∩ Σ2 ⊆ Σ′, we get

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), s)

Thus, in either case we have

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), s)

We now show that

N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2) ⊆ N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′)

Let s′ ∈ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2). If s′ = ε, then

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ (X1,m/≈Σ̂1) × (X2,m/≈Σ̂2)

from which we get (x1, x2) ∈ X1,m × X2,m. Thus, < (x1, x2) >Σ′ ∈ (X1,m × X2,m)/≈Σ′, namely ε ∈ N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′). If s′ ≠ ε, then there exists t′ ∈ Σ∗ with P(t′) = s′ such that ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅. Since P(t′) ≠ ε, by the definition of abstraction, we get ξ′(< (x1, x2) >Σ′, s′) ∩ (X1,m × X2,m)/≈Σ′ ≠ ∅. Thus, s′ ∈ N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′). In either case, we have

N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2) ⊆ N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′)

Thus, (G1 × G2)/≈Σ′ ⊑ (G1/≈Σ̂1) × (G2/≈Σ̂2).
Suppose Gi (i = 1, 2) is marking aware with respect to Σi ∩ Σ′. To show

B((G1/≈Σ̂1) × (G2/≈Σ̂2)) = B((G1 × G2)/≈Σ′)

we only need to prove one direction (⊆), because the other direction (⊇) has been proved above. Let s ∈ B((G1/≈Σ̂1) × (G2/≈Σ̂2)). Then there exists (x1, x2) ∈ X1 × X2 such that

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), s)   (4)

and

(∀s′ ∈ Σ′∗) ξ′1 × ξ′2((< x1 >Σ̂1, < x2 >Σ̂2), s′) ∩ ((X1,m/≈Σ̂1) × (X2,m/≈Σ̂2)) = ∅   (5)

From Expression (4) we get that

(∃t ∈ (Σ1 ∪ Σ2)∗) P(t) = s ∧ (x1, x2) ∈ ξ1 × ξ2((x1,0, x2,0), t)   (6)

From Expression (5) we get that (x1, x2) ∉ X1,m × X2,m. Since G1 and G2 are standardized, from Expression (6) and the fact that Σ1 ∩ Σ2 ⊆ Σ′ we have

< (x1, x2) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, s)

We claim that < (x1, x2) >Σ′ is a blocking state of (G1 × G2)/≈Σ′. Otherwise, there exists s′ ∈ Σ′∗ such that

ξ′(< (x1, x2) >Σ′, s′) ∩ (X1,m × X2,m)/≈Σ′ ≠ ∅

Since (x1, x2) ∉ X1,m × X2,m, we get that < (x1, x2) >Σ′ ∉ (X1,m × X2,m)/≈Σ′. Thus, s′ ≠ ε. Furthermore, since Gi (i = 1, 2) is marking aware with respect to Σi ∩ Σ′, we have P̂i(s′) ≠ ε. Thus, there exists t′ ∈ Σ∗ with P(t′) = s′ such that ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅. Since P̂i(s′) ≠ ε for i = 1, 2 and Σ1 ∩ Σ2 ⊆ Σ′, we have

ξ′1 × ξ′2((< x1 >Σ̂1, < x2 >Σ̂2), s′) ∩ ((X1,m/≈Σ̂1) × (X2,m/≈Σ̂2)) ≠ ∅

which contradicts Expression (5). Thus, the claim is true, namely s ∈ B((G1 × G2)/≈Σ′).
Let s ∈ N((G1/≈Σ̂1) × (G2/≈Σ̂2)). For any (x1, x2) ∈ X1 × X2 with

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1,0 >Σ̂1, < x2,0 >Σ̂2), s)

we have

(∃t ∈ Σ∗) P(t) = s ∧ (x1, x2) ∈ ξ1 × ξ2((x1,0, x2,0), t)

Since G1 and G2 are standardized, if s = ε, then t = ε, which means (x1, x2) = (x1,0, x2,0). Clearly, we have

< (x1,0, x2,0) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, ε)

If s ≠ ε, then by the definition of automaton abstraction and the assumption that Σ1 ∩ Σ2 ⊆ Σ′, we get

< (x1, x2) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, s)

Thus, in either case we have < (x1, x2) >Σ′ ∈ ξ′(< (x1,0, x2,0) >Σ′, s). We now show that

N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′) ⊆ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2)

Let s′ ∈ N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′). If s′ = ε, then < (x1, x2) >Σ′ ∈ (X1,m × X2,m)/≈Σ′, from which we can derive that (x1, x2) ∈ X1,m × X2,m. Thus,

(< x1 >Σ̂1, < x2 >Σ̂2) ∈ (X1,m/≈Σ̂1) × (X2,m/≈Σ̂2)

namely ε ∈ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2). If s′ ≠ ε, then

< (x1, x2) >Σ′ ∉ (X1,m × X2,m)/≈Σ′

which means (x1, x2) ∉ X1,m × X2,m. Furthermore, there exists t′ ∈ Σ∗ with P(t′) = s′ such that ξ1 × ξ2((x1, x2), t′) ∩ (X1,m × X2,m) ≠ ∅. We consider three cases. Case 1: P̂i(s′) ≠ ε (i = 1, 2), namely x1 ∉ X1,m and x2 ∉ X2,m. By the definition of automaton abstraction, we get that

ξ′1 × ξ′2((< x1 >Σ̂1, < x2 >Σ̂2), s′) ∩ (X1,m/≈Σ̂1) × (X2,m/≈Σ̂2) ≠ ∅

Thus, s′ ∈ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2). Case 2: P̂1(s′) = ε and P̂2(s′) = s′ ≠ ε. Since G1 is marking aware with respect to Σ1 ∩ Σ′, P̂1(s′) = ε implies that x1 ∈ X1,m. Since P̂2(s′) = s′ ≠ ε, we have

(∃ < x̂2 >Σ̂2 ∈ X2,m/≈Σ̂2) < x̂2 >Σ̂2 ∈ ξ′2(< x2 >Σ̂2, P̂2(s′))

Thus,

(< x1 >Σ̂1, < x̂2 >Σ̂2) ∈ ξ′1 × ξ′2((< x1 >Σ̂1, < x2 >Σ̂2), s′)

which means s′ ∈ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2). Case 3: P̂1(s′) ≠ ε and P̂2(s′) = ε. This case is similar to Case 2. In all three cases, we have

N_{(G1×G2)/≈Σ′}(< (x1, x2) >Σ′) ⊆ N_{(G1/≈Σ̂1)×(G2/≈Σ̂2)}(< x1 >Σ̂1, < x2 >Σ̂2)

Thus, (G1/≈Σ̂1) × (G2/≈Σ̂2) ⊑ (G1 × G2)/≈Σ′.
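The identity N(G1 × G2) = N(G1) || N(G2) used in this proof can be sanity-checked on small finite languages by brute-force enumeration of the synchronous product ||: a string s is in L1 || L2 iff P1(s) ∈ L1 and P2(s) ∈ L2. A minimal sketch (alphabets and strings are illustrative, and the enumeration is bounded by a maximum length since the full product is in general infinite):

```python
from itertools import product

def project(s, alphabet):
    """Natural projection: erase events outside the given alphabet."""
    return tuple(e for e in s if e in alphabet)

def sync_compose(L1, S1, L2, S2, max_len):
    """Brute-force synchronous product of two finite languages, enumerated
    over all strings of length <= max_len on the union alphabet:
    s is in L1 || L2 iff P1(s) is in L1 and P2(s) is in L2."""
    events = sorted(S1 | S2)
    return {s
            for n in range(max_len + 1)
            for s in product(events, repeat=n)
            if project(s, S1) in L1 and project(s, S2) in L2}

# The shared event "t" must be agreed on; private "a" and "b" interleave.
L = sync_compose({("a", "t")}, {"a", "t"}, {("t", "b")}, {"t", "b"}, 3)
print(L)   # → {('a', 't', 'b')}
```

Here only the ordering a, t, b survives: erasing "b" must leave ("a", "t") and erasing "a" must leave ("t", "b"), exactly the constraint the projections impose in the proof above.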
3. Proof of Prop. 2.6: Let Gi = (Xi, Σi, ξi, xi,0, Xi,m) with i = 1, 2, 3, where Σ1 = Σ2 = Σ and Σ3 = Σ′. Let P : (Σ ∪ Σ′)∗ → Σ∗ and P′ : (Σ ∪ Σ′)∗ → Σ′∗ be natural projections. We first show that N(G1 × G3) = N(G2 × G3). Clearly, we have N(G1 × G3) = N(G1) || N(G3). Since G1 ⊑ G2, we have N(G1) = N(G2). Thus, we have

N(G1 × G3) = N(G1) || N(G3) = N(G2) || N(G3) = N(G2 × G3)

To show that B(G1 × G3) ⊆ B(G2 × G3), let s ∈ B(G1 × G3). By the definition of automaton product, there exists x1 ∈ X1 such that x1 ∈ ξ1(x1,0, P(s)). There are two cases to consider. Case 1: x1 is a blocking state. Then P(s) ∈ B(G1) ⊆ B(G2). Thus, s ∈ B(G2 × G3). Case 2: x1 is a nonblocking state. Since G1 ⊑ G2, there exists x2 ∈ ξ2(x2,0, P(s)) such that NG1(x1) ⊇ NG2(x2). Since s ∈ B(G1 × G3), there exists x3 ∈ X3 such that (x1, x3) ∈ ξ1 × ξ3((x1,0, x3,0), s) and NG1×G3(x1, x3) = ∅. We have

NG2×G3(x2, x3) = NG2(x2) || NG3(x3) ⊆ NG1(x1) || NG3(x3) = NG1×G3(x1, x3) = ∅

Thus, (x2, x3) is a blocking state of G2 × G3, which means s ∈ B(G2 × G3). Therefore, in either case we have B(G1 × G3) ⊆ B(G2 × G3).

Finally, following the argument of Case 2, for any s ∈ (Σ ∪ Σ′)∗ and (x1, x3) ∈ ξ1 × ξ3((x1,0, x3,0), s), we have (x2, x3) ∈ ξ2 × ξ3((x2,0, x3,0), s) such that NG2×G3(x2, x3) ⊆ NG1×G3(x1, x3). Thus, G1 × G3 ⊑ G2 × G3.
4. Proof of Prop. 2.10: Let Gi = (Xi, Σ, ξi, xi,0, Xi,m), where i = 1, 2, and let P : Σ∗ → Σ′∗ be the natural projection. Since G1 ⊑ G2, by Prop. 2.4 we have

N(G1/≈Σ′) = P(N(G1)) = P(N(G2)) = N(G2/≈Σ′)

To show B(G1/≈Σ′) ⊆ B(G2/≈Σ′), let ξ′i (i = 1, 2) be the transition map of Gi/≈Σ′. For any s ∈ B(G1/≈Σ′), there exists x1 ∈ X1 such that

< x1 > ∈ ξ′1(< x1,0 >, s) ∧ (∀s′ ∈ Σ′∗) ξ′1(< x1 >, s′) ∩ X1,m/≈Σ′ = ∅

which means there exists t ∈ Σ∗ such that

P(t) = s ∧ x1 ∈ ξ1(x1,0, t) ∧ (∀t′ ∈ Σ∗) ξ1(x1, t′) ∩ X1,m ≠ ∅ ⇒ t′ ∈ (Σ − Σ′)∗

Thus, NG1(x1) ⊆ (Σ − Σ′)∗. There are two cases. Case 1: x1 is a blocking state of G1. Then t ∈ B(G1), which means s = P(t) ∈ P(B(G1)). Since G1 ⊑ G2, we have B(G1) ⊆ B(G2). Thus, by Prop. 2.4, we have s ∈ P(B(G2)) ⊆ B(G2/≈Σ′). Case 2: x1 is a nonblocking state. Thus t ∈ N(G1). Clearly x1 ∉ X1,m. Since G1 ⊑ G2, there exists x2 ∈ X2 such that

x2 ∈ ξ2(x2,0, t) ∧ NG1(x1) = NG2(x2) ∧ [x1 ∈ X1,m ⇐⇒ x2 ∈ X2,m]

Since NG1(x1) ⊆ (Σ − Σ′)∗, we have

(∀t′ ∈ Σ∗) ξ2(x2, t′) ∩ X2,m ≠ ∅ ⇒ t′ ∈ (Σ − Σ′)∗

Since x1 ∉ X1,m, we have x2 ∉ X2,m. Thus, by the definition of automaton abstraction,

< x2 > ∈ ξ′2(< x2,0 >, P(t)) ∧ (∀s′ ∈ Σ′∗) ξ′2(< x2 >, s′) ∩ X2,m/≈Σ′ = ∅

which means s = P(t) ∈ B(G2/≈Σ′). Thus, in either case we have B(G1/≈Σ′) ⊆ B(G2/≈Σ′).
Finally, for each s ∈ N(G1/≈Σ′), there exists x1 ∈ X1 such that

< x1 > ∈ ξ′1(< x1,0 >, s) ∧ (∃s′ ∈ Σ′∗) ξ′1(< x1 >, s′) ∩ X1,m/≈Σ′ ≠ ∅

which means there exists t ∈ Σ∗ with P(t) = s such that

x1 ∈ ξ1(x1,0, t) ∧ (∃t′ ∈ Σ∗) P(t′) = s′ ∧ ξ1(x1, t′) ∩ X1,m ≠ ∅

Clearly, t ∈ N(G1). Thus, by G1 ⊑ G2, we have

(∃x2 ∈ ξ2(x2,0, t)) NG1(x1) ⊇ NG2(x2) ∧ [x1 ∈ X1,m ⇐⇒ x2 ∈ X2,m]

Since G2 is standardized, < x2 > ∈ ξ′2(< x2,0 >, s). For any s′ ∈ N_{G2/≈Σ′}(< x2 >), we have ξ′2(< x2 >, s′) ∩ X2,m/≈Σ′ ≠ ∅. If s′ = ε, then x2 ∈ X2,m, which means x1 ∈ X1,m, namely ε ∈ N_{G1/≈Σ′}(< x1 >). If s′ ≠ ε, then there exists t′ ∈ Σ∗ with P(t′) = s′ such that ξ2(x2, t′) ∩ X2,m ≠ ∅, which means t′ ∈ NG2(x2) ⊆ NG1(x1). Thus,

ξ′1(< x1 >, s′) ∩ X1,m/≈Σ′ ≠ ∅

namely s′ ∈ N_{G1/≈Σ′}(< x1 >). Thus, N_{G2/≈Σ′}(< x2 >) ⊆ N_{G1/≈Σ′}(< x1 >).
5. Proof of Prop. 2.12: Let ξ′′ be the transition map of G/≈Σ′′, and ξ′′′ the transition map of (G/≈Σ′)/≈Σ′′. Let P12 : Σ∗ → Σ′∗, P13 : Σ∗ → Σ′′∗ and P23 : Σ′∗ → Σ′′∗ be natural projections. We first show that G/≈Σ′′ ⊑ (G/≈Σ′)/≈Σ′′.

By Prop. 2.4 we have

N(G/≈Σ′′) = P13(N(G)) = P23(P12(N(G))) = P23(N(G/≈Σ′)) = N((G/≈Σ′)/≈Σ′′)

We now show B(G/≈Σ′′) ⊆ B((G/≈Σ′)/≈Σ′′). Let s ∈ B(G/≈Σ′′). There exists x ∈ X such that

< x >Σ′′ ∈ ξ′′(< x0 >Σ′′, s) ∧ (∀s′ ∈ Σ′′∗) ξ′′(< x >Σ′′, s′) ∩ Xm/≈Σ′′ = ∅

Thus, there exists t ∈ Σ∗ with P13(t) = s such that

x ∈ ξ(x0, t) ∧ (∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm ≠ ∅ ⇒ t′ ∈ (Σ − Σ′′)∗   (7)

We have two cases to consider. Case 1: x is a blocking state. Then clearly t ∈ B(G). By Prop. 2.4, P13(t) = s = P23(P12(t)) ∈ P23(P12(B(G))) ⊆ B((G/≈Σ′)/≈Σ′′). Case 2: x is a nonblocking state. Clearly x ∉ Xm, which means << x >Σ′ >Σ′′ ∉ (Xm/≈Σ′)/≈Σ′′. Thus, from Expression (7) and the definition of automaton abstraction, we get that

<< x >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, s) ∧ (∀s′ ∈ Σ′′∗) ξ′′′(<< x >Σ′ >Σ′′, s′) ∩ (Xm/≈Σ′)/≈Σ′′ = ∅

Thus, s ∈ B((G/≈Σ′)/≈Σ′′). In either case we have B(G/≈Σ′′) ⊆ B((G/≈Σ′)/≈Σ′′).
Let s ∈ N(G/≈Σ′′). For any x ∈ X with < x >Σ′′ ∈ ξ′′(< x0 >Σ′′, s), we have that

(∃t ∈ Σ∗) P13(t) = s ∧ x ∈ ξ(x0, t)

Since G is standardized, if s = ε, then t = ε, which means x = x0. Clearly, we have << x0 >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, ε). If s ≠ ε, then by the definition of automaton abstraction and the assumption that Σ′′ ⊆ Σ′ ⊆ Σ, we get that << x >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, s). Thus, in either case we have << x >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, s). We now show that

N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′) ⊆ N_{G/≈Σ′′}(< x >Σ′′)

Let s′ ∈ N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′). If s′ = ε, then << x >Σ′ >Σ′′ ∈ (Xm/≈Σ′)/≈Σ′′, which means x ∈ Xm. Thus, < x >Σ′′ ∈ Xm/≈Σ′′, namely ε ∈ N_{G/≈Σ′′}(< x >Σ′′). If s′ ≠ ε, then there exists t′ ∈ Σ∗ with P13(t′) = s′ such that ξ(x, t′) ∩ Xm ≠ ∅. Since P13(t′) ≠ ε, by the definition of automaton abstraction,

ξ′′(< x >Σ′′, s′) ∩ Xm/≈Σ′′ ≠ ∅

Thus, s′ ∈ N_{G/≈Σ′′}(< x >Σ′′). In either case, we get

N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′) ⊆ N_{G/≈Σ′′}(< x >Σ′′)
Next, we show that (G/≈Σ′)/≈Σ′′ ⊑ G/≈Σ′′. We first show

B((G/≈Σ′)/≈Σ′′) ⊆ B(G/≈Σ′′)

Let s ∈ B((G/≈Σ′)/≈Σ′′). There exists x ∈ X such that

<< x >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, s) ∧ (∀s′ ∈ Σ′′∗) ξ′′′(<< x >Σ′ >Σ′′, s′) ∩ (Xm/≈Σ′)/≈Σ′′ = ∅

Thus, there exists t ∈ Σ∗ with P13(t) = s such that

x ∈ ξ(x0, t) ∧ (∀t′ ∈ Σ∗) ξ(x, t′) ∩ Xm ≠ ∅ ⇒ t′ ∈ (Σ − Σ′′)∗   (8)

We have two cases to consider. Case 1: x is a blocking state. Then clearly t ∈ B(G). By Prop. 2.4, P13(t) = s ∈ P13(B(G)) ⊆ B(G/≈Σ′′). Case 2: x is a nonblocking state. Clearly x ∉ Xm, which means < x >Σ′′ ∉ Xm/≈Σ′′. Thus, from Expression (8) we get that

< x >Σ′′ ∈ ξ′′(< x0 >Σ′′, s) ∧ (∀s′ ∈ Σ′′∗) ξ′′(< x >Σ′′, s′) ∩ Xm/≈Σ′′ = ∅

Thus, s ∈ B(G/≈Σ′′). In either case we have B((G/≈Σ′)/≈Σ′′) ⊆ B(G/≈Σ′′).

Let s ∈ N((G/≈Σ′)/≈Σ′′). For any x ∈ X with << x >Σ′ >Σ′′ ∈ ξ′′′(<< x0 >Σ′ >Σ′′, s), we have

(∃t ∈ Σ∗) P13(t) = s ∧ x ∈ ξ(x0, t)

Since G is standardized, if s = ε, then t = ε, which means x = x0. Clearly, we have < x0 >Σ′′ ∈ ξ′′(< x0 >Σ′′, ε). If s ≠ ε, then by the definition of automaton abstraction and the assumption that Σ′′ ⊆ Σ′ ⊆ Σ, we get that

< x >Σ′′ ∈ ξ′′(< x0 >Σ′′, s)

Thus, in either case we have < x >Σ′′ ∈ ξ′′(< x0 >Σ′′, s). We now show that

N_{G/≈Σ′′}(< x >Σ′′) ⊆ N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′)

Let s′ ∈ N_{G/≈Σ′′}(< x >Σ′′). If s′ = ε, then < x >Σ′′ ∈ Xm/≈Σ′′, which means x ∈ Xm. Thus, << x >Σ′ >Σ′′ ∈ (Xm/≈Σ′)/≈Σ′′, namely ε ∈ N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′). If s′ ≠ ε, then there exists t′ ∈ Σ∗ with P13(t′) = s′ such that ξ(x, t′) ∩ Xm ≠ ∅. Since P13(t′) ≠ ε, by the definition of automaton abstraction, we get that

ξ′′′(<< x >Σ′ >Σ′′, s′) ∩ (Xm/≈Σ′)/≈Σ′′ ≠ ∅

Thus, s′ ∈ N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′). In either case, we have

N_{G/≈Σ′′}(< x >Σ′′) ⊆ N_{(G/≈Σ′)/≈Σ′′}(<< x >Σ′ >Σ′′)

The proposition follows.
6. Proof of Prop. 3.5: The ONLY IF part is obvious, so we only need to show the IF part. Let S be a (canonical) recognizer of N(A), i.e. N(S) = N(A) and L(S) = N(A) = L(A) (because B(G × A) = ∅). Then we have

N(G × S) = N(G) || N(S) = N(G) || N(A) = N(G × A) ⊆ N(G × H)

Next, we show B(G × S) = ∅. Let G = (X, Σ, ξ, x0, Xm), A = (Z, Σ′, δ, z0, Zm) and S = (Y, Σ′, η, y0, Ym). Suppose B(G × S) ≠ ∅. Then there exists s ∈ B(G × S) such that

(∃(x, y) ∈ ξ × η((x0, y0), s))(∀s′ ∈ Σ∗) ξ × η((x, y), s′) ∩ (Xm × Ym) = ∅

Let P : Σ∗ → Σ′∗ be the natural projection. Then P(s) ∈ L(S) = N(A), so there exists z ∈ δ(z0, P(s)), namely (x, z) ∈ ξ × δ((x0, z0), s). Since B(G × A) = ∅, we get that

(∃s′ ∈ Σ∗) ξ × δ((x, z), s′) ∩ (Xm × Zm) ≠ ∅

Thus, ξ(x, s′) ∩ Xm ≠ ∅ and P(ss′) ∈ N(A) = N(S). Since S is deterministic, η(y, P(s′)) ∩ Ym ≠ ∅. Therefore, ξ × η((x, y), s′) ∩ (Xm × Ym) ≠ ∅, contradicting the fact that (x, y) is a blocking state. Thus, B(G × S) = ∅.

For each s ∈ L(G × S), let x ∈ ξ(x0, s) and y ∈ η(y0, P(s)). Since A is state-controllable, for any z ∈ δ(z0, P(s)), we have

EG(x) ∩ Σuc ∩ Σ′ ⊆ EA(z)

Since ES(y) = ∪_{z ∈ δ(z0, P(s))} EA(z), we have

EG(x) ∩ Σuc ∩ Σ′ ⊆ ES(y)

Thus, S is state-controllable with respect to G and Σuc.
Next, we show that S is state-observable w.r.t. G and Po if A is state-observable w.r.t. G and Po. Suppose it is not true. Then there exist s, s′ ∈ L(G × S) ⊆ L(A) with Po(s) = Po(s′), (x, y) ∈ ξ × η((x0, y0), s) and (x′, y′) ∈ ξ × η((x0, y0), s′) such that

EG×S(x, y) ∩ EG(x′) ∩ Σ′ ⊄ ES(y′)

Since S is deterministic, we have that

(∃σ ∈ Σ′) sσ ∈ L(G) ∧ s′σ ∈ L(G) ∧ P(s)σ ∈ L(S) ∧ P(s′)σ ∉ L(S)

Since L(S) = L(A), there exist s, s′ ∈ L(A) with Po(s) = Po(s′) such that

sσ ∈ L(G) ∧ s′σ ∈ L(G) ∧ P(s)σ ∈ L(A) ∧ P(s′)σ ∉ L(A)

Pick z ∈ δ(z0, P(s)) and z′ ∈ δ(z0, P(s′)); then (x, z) ∈ ξ × δ((x0, z0), s) and (x′, z′) ∈ ξ × δ((x0, z0), s′). Furthermore, we have that σ ∈ EG×A(x, z) ∩ EG(x′) ∩ Σ′ but σ ∉ EA(z′), namely

EG×A(x, z) ∩ EG(x′) ∩ Σ′ ⊄ EA(z′)

which contradicts the assumption that A is state-observable w.r.t. G and Po. Thus, S is state-observable w.r.t. G and Po.
Finally, we show that S is state-normal w.r.t. G and Po if A is state-normal w.r.t. G and Po. Let s ∈ L(G × S) and s′ ∈ Po⁻¹(Po(s)) ∩ L(G × S). For any (x, y) ∈ ξ × η((x0, y0), s′) and s′′ ∈ Σ∗ with Po(s′s′′) = Po(s), we need to show that

ξ(x, s′′) ≠ ∅ ⇒ η(y, P(s′′