The Generalised Mapping Regressor (GMR) neural network for inverse discontinuous problems
Student: Chuan Lu
Promotor: Prof. Sabine Van Huffel
Daily supervisor: Dr. Giansalvo Cirrincione
Mapping Approximation Problem
Feedforward neural networks are universal approximators of nonlinear continuous functions (many-to-one, one-to-one), but:
they don't yield multiple solutions
they don't yield infinite solutions
they don't approximate mapping discontinuities
Inverse and Discontinuous Problems
Mapping: multi-valued, with a complex structure.
The least-squares approach (sum-of-squares error function) used by feedforward neural networks yields the conditional average of the target data, hence a poor representation of a multi-valued mapping.
Mapping with discontinuities.
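This failure mode is easy to see on a toy two-valued mapping (a hypothetical setup chosen for illustration, not an example from the thesis): a least-squares fit collapses onto the conditional average and represents neither branch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
# Two-valued mapping: every x has the two solutions y = +0.5 and y = -0.5.
y = np.where(rng.random(200) < 0.5, 0.5, -0.5)

# Least-squares linear fit (a stand-in for any sum-of-squares regressor).
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

# The fit hovers near the conditional average E[y|x] = 0, far from both
# branches at +0.5 and -0.5.
print(float(np.abs(y_hat).max()))
```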
Mixture-of-Experts (ME)
[Figure: input feeds Network 1, Network 2, Network 3 and a gating network; their outputs are combined into the final output.]
It partitions the solution between several networks. It uses a separate network to determine the parameters of each kernel, with a further network to determine the mixing coefficients.
winner-take-all (Jacobs and Jordan)
kernel blending (Bishop's ME extension)
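The blending idea can be sketched in a few lines (an illustrative toy, not Jacobs and Jordan's training procedure: the expert and gate parameters below are made up by hand):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed toy setup: three linear experts and a linear gating network in 1-D.
expert_w = np.array([[2.0, 0.0], [0.0, 1.0], [-2.0, 0.5]])   # slope, bias
gate_w   = np.array([[-5.0, 2.0], [0.0, 0.0], [5.0, 2.0]])   # slope, bias

def mixture(x):
    experts = expert_w[:, 0] * x + expert_w[:, 1]     # each expert's output
    gates = softmax(gate_w[:, 0] * x + gate_w[:, 1])  # mixing coefficients
    return gates @ experts                            # blended prediction

# Near x = -1 the gate selects expert 0; near x = +1 it selects expert 2,
# so the mixture switches between different local models.
print(mixture(-1.0), mixture(1.0))
```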
[Figures, Examples #1 to #4: ME vs. MLP approximation results.]
Generalised Mapping Regressor (GMR)
(G. Cirrincione and M. Cirrincione, 1998)
Approximates every kind of function or relation M(x, y), with x ∈ R^n and y ∈ R^m.
input: a collection of components of x and y
output: estimation of the remaining components
outputs all solutions, mapping branches, and equilevel hypersurfaces
Characteristics:
coarse-to-fine learning
incremental, competitive
based on mapping recovery (curse of dimensionality)
topological neuron linking (distance, direction)
linking tracking (branches, contours)
open architecture
GMR Basic Ideas
function approximation as pattern recognition
work in the Z (augmented) space
unsupervised learning
clusters = mapping branches
GMR: Four Phases
1. Learning: training set -> pool of neurons
2. Linking: links between neurons
3. Merging: objects merged (object 1, object 2, object 3)
4. Recalling: INPUT -> branch 1, branch 2
EXIN Segmentation Neural Network (EXIN SNN)
(G. Cirrincione, 1998)
clustering in the input/weight space
[Figure: inputs x1..x5 with winner distances s; a new neuron (e.g. w4 = x4) is created when the winner's distance exceeds the vigilance threshold.]
GMR Learning: coarse quantization
Z (augmented) space
EXIN SNN with a high vigilance threshold (ρ1)
each resulting neuron is a branch (object) neuron
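The coarse pass can be sketched as vigilance-gated competitive learning (a deliberately simplified stand-in for the EXIN SNN: hard winner-take-all with a creation threshold; the real network's update rules are richer):

```python
import numpy as np

def coarse_quantize(samples, vigilance, lr=0.1):
    """Create a neuron when the winner is farther than `vigilance`,
    otherwise move the winner toward the sample."""
    weights = [np.array(samples[0], dtype=float)]
    for z in samples[1:]:
        z = np.asarray(z, dtype=float)
        d = [np.linalg.norm(z - w) for w in weights]
        i = int(np.argmin(d))
        if d[i] > vigilance:
            weights.append(z.copy())             # new (object) neuron
        else:
            weights[i] += lr * (z - weights[i])  # winner update
    return weights

# Two well-separated clusters in Z = (x, y) space -> two object neurons.
rng = np.random.default_rng(1)
zs = np.vstack([rng.normal(0.0, 0.05, (50, 2)),
                rng.normal(1.0, 0.05, (50, 2))])
print(len(coarse_quantize(zs, vigilance=0.5)))
```

A high vigilance gives few, coarse neurons; lowering it (as in the secondary EXIN SNNs) produces a finer quantization.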
GMR Learning: domain setting
Z (augmented) space
production phase
Voronoi sets
GMR Learning: fine quantization
Z (augmented) space
secondary EXIN SNNs, one per object Voronoi set (TS#1 ... TS#5)
vigilance ρ2 < ρ1
other levels are possible
GMR Coarse-to-Fine Learning (example)
[Figures in the (x, y) plane: PLN level 1 with ρ1 = 0.2, 3 epochs; PLN levels 1-2 with ρ1 = 0.2 (3 epochs) and ρ2 = 0.1 (3 epochs). 1st PLN: 13 neurons; 2nd PLN: 24 neurons. Labels: object neuron, fine-VQ neurons, object neuron's Voronoi set.]
GMR Linking
Task 1: use each neuron's Voronoi set to set up the neuron radius (domain variable): neuron i has an asymmetric radius ri in the weight space.
GMR Linking
Task 2: for one TS presentation zi:
[Figure: weights w1..w5 with distances d1..d5 from zi.]
linking candidates via k-nn (branch-and-bound search technique)
distance test
direction test (linking direction)
create a link or strengthen a link
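A single linking step can be sketched as follows (an illustrative reconstruction: the distance-test factor and the cosine-based direction criterion are assumptions, not the thesis's exact formulas):

```python
import numpy as np

def link_step(weights, z, links, k=3, dist_factor=2.5, cos_min=0.0):
    """For one TS presentation z: find the winner, take the k-nn candidates,
    apply distance and direction tests, then create/strengthen links."""
    d = np.linalg.norm(weights - z, axis=1)
    order = np.argsort(d)
    winner, d1 = order[0], d[order[0]]
    for j in order[1:k + 1]:                        # linking candidates
        if d[j] > dist_factor * d1:                 # distance test
            continue
        v1 = z - weights[winner]
        v2 = weights[j] - weights[winner]
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        if denom > 0 and (v1 @ v2) / denom < cos_min:
            continue                                # direction test
        key = (min(winner, int(j)), max(winner, int(j)))
        links[key] = links.get(key, 0) + 1          # create/strengthen
    return links

# Neurons along a curve plus one outlier: samples between neighbouring
# neurons link them, while the far-away neuron stays isolated.
weights = np.array([[0.0, 0.0], [0.3, 0.1], [0.6, 0.4], [5.0, 5.0]])
links = {}
for z in [np.array([0.16, 0.05]), np.array([0.46, 0.25])]:
    link_step(weights, z, links)
print(links)
```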
Branch-and-Bound Accelerated Linking
neuron tree constructed during the learning phase (multilevel EXIN SNN learning)
methods in the linking-candidate step (k-nearest-neighbour computation):
d-BnB: keep candidates with distance < γ·d1 (γ: predefined linking factor)
k-BnB: k predefined
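The speed-up comes from pruning whole subtrees with a triangle-inequality bound. A minimal sketch over a two-level tree (cluster centroids plus radii; the thesis's d-BnB and k-BnB variants add the linking-factor / k-nn stopping rules on top of this idea):

```python
import numpy as np

def build_tree(weights, n_clusters=4, iters=10, seed=0):
    """Crude k-means grouping of neurons; returns (centroids, radii, members)."""
    rng = np.random.default_rng(seed)
    centroids = weights[rng.choice(len(weights), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(weights[:, None] - centroids, axis=2), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centroids[c] = weights[labels == c].mean(axis=0)
    members = [np.flatnonzero(labels == c) for c in range(n_clusters)]
    radii = [np.linalg.norm(weights[m] - centroids[c], axis=1).max()
             if len(m) else 0.0 for c, m in enumerate(members)]
    return centroids, radii, members

def bnb_nearest(query, weights, tree):
    """Exact nearest neighbour with branch-and-bound pruning."""
    centroids, radii, members = tree
    best_i, best_d = -1, np.inf
    for c in np.argsort(np.linalg.norm(centroids - query, axis=1)):
        # No member of cluster c can be closer than this lower bound.
        if np.linalg.norm(centroids[c] - query) - radii[c] >= best_d:
            continue                       # prune the whole branch
        for i in members[c]:
            dist = np.linalg.norm(weights[i] - query)
            if dist < best_d:
                best_i, best_d = int(i), dist
    return best_i, best_d

rng = np.random.default_rng(2)
weights = rng.random((200, 2))
tree = build_tree(weights)
q = np.array([0.5, 0.5])
i, d = bnb_nearest(q, weights, tree)
print(i, d)
```

The pruning is exact: the bound dist(q, centroid) - radius never underestimates the true distance to a cluster member, so the result matches brute force while skipping most distance computations.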
GMR Linking: branch-and-bound experimental results
Percentage of linking flops saved by branch and bound (total linking flops per test case in parentheses, x100,000):

               2-D (TS 2k: 8)   2-D (TS 4k: 24)   3-D (TS 3k: 199)
2-level d-BnB       44%              64%               76%
2-level k-BnB       43%              59%               81%
3-level d-BnB       31%              55%               80%
3-level k-BnB       27%              47%               83%

Up to 83% of the linking flops are saved.
Branch and Bound (cont.)
Branch and bound is also applied in the learning phase (labelling):
tree construction via k-means or EXIN SNN
Experimental results (3-D example): 50% of the labelling flops are saved.
GMR Linking Example
[Figures: links created among neurons in the (x, y) plane; linking factor γ = 2.5.]
GMR Merging Example
[Figures in the (x, y) plane: merging with threshold = 1; objects: 13 -> 3.]
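The merging bookkeeping can be sketched with a union-find over the inter-object links (an assumed implementation: the thesis does not prescribe this data structure, and the link strengths below are invented for illustration):

```python
def merge_objects(n_objects, link_counts, threshold=1):
    """link_counts: {(obj_a, obj_b): strength}; merge pairs whose link
    strength reaches the threshold, then count surviving branches."""
    parent = list(range(n_objects))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for (a, b), strength in link_counts.items():
        if strength >= threshold:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb             # union the two objects
    return len({find(a) for a in range(n_objects)})

# Mirroring the example: 13 coarse objects collapse into 3 branches.
links = {(0, 1): 2, (1, 2): 1, (2, 3): 1, (3, 4): 3,
         (5, 6): 1, (6, 7): 2, (7, 8): 1,
         (9, 10): 1, (10, 11): 1, (11, 12): 2}
print(merge_objects(13, links, threshold=1))   # -> 3
```

Raising the threshold keeps weakly linked objects apart, so the same links can yield more branches.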
GMR Recalling Example
Test function: y = f(x) = (1/4) sin(2πx^2) + 0.01x + 0.04
[Figures: recall for x = 0.2 yields 3 level-1 neurons; level-1 and level-2 neurons are marked on branch 1 and branch 2.]
[Figure: recall for y = 0.6 yields 1 level-1 neuron.]
level-one neurons: input within their domain
level-two neurons: only connected ones
level-zero neurons: isolated (noise)
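These three levels can be sketched as a small classification routine (a hedged reconstruction: 1-D input components and symmetric radii are simplifying assumptions, while the thesis uses asymmetric radii):

```python
import numpy as np

def classify(x_in, in_weights, radii, links):
    """Level 1: input falls within the neuron's domain (radius).
    Level 2: neuron merely linked to a level-one neuron.
    Level 0: isolated with respect to the input (noise)."""
    n = len(in_weights)
    level = np.zeros(n, dtype=int)
    for i in range(n):
        if abs(in_weights[i] - x_in) <= radii[i]:
            level[i] = 1                    # input within its domain
    for i, j in links:                      # promote connected neurons
        if level[i] == 1 and level[j] == 0:
            level[j] = 2
        if level[j] == 1 and level[i] == 0:
            level[i] = 2
    return level

# Hypothetical neurons: input-space weight components, radii, and links.
in_w = np.array([0.0, 0.3, 0.5, 0.9])
radii = np.array([0.15, 0.15, 0.1, 0.1])
links = [(0, 1), (2, 3)]
print(classify(0.05, in_w, radii, links))   # -> [1 2 0 0]
```

For the input 0.05, neuron 0 is level one, neuron 1 is promoted to level two through its link, and neurons 2 and 3 stay at level zero.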
Experiments
Spiral of Archimedes: ρ = aθ (a = 1)
Experiments
Sparse regions: further normalizing + higher mapping resolution
Test function: y = f(x) = (1/4) sin(2πx^2) + 0.01x + 0.04
Experiments
Noisy data: lemniscate of Bernoulli, (x^2 + y^2)^2 = a^2 (x^2 - y^2), a = 1
[Figure: solutions for y = -0.5; level-1 neurons: 6.]
Experiments
[Figures: solutions for y = -0.1 (level-1 neurons: 10), y = 0.5 (5), y = 1 (19).]
Experiments
Lissajous curve: x = cos(at), y = sin(bt), with a = 3, b = 5
Contours: links among level-one neurons.
GMR mapping of 8 spheres in a 3-D scene.
Conclusions
GMR is able to:
solve inverse discontinuous problems
approximate every kind of mapping
yield all the solutions and the corresponding branches
GMR can be accelerated by applying tree-search techniques.
GMR needs:
interpolation techniques
kernels or projection techniques for high-dimensional data
adaptive parameters
Thank you! (Xièxie!)
GMR Recall
[Figure: an input presented to a pool of eight neurons w1..w8, all initialised with level li = 0 and branch bi = 0; within radius r1 the winner gets l1 = 1, b1 = 1, and the connected neuron gets l3 = 2, b3 = 1.]
linking tracking
restricted distance
level-one test
connected neuron: level zero -> level two
branch <- the winner's branch
GMR Recall
[Figure: a further candidate within radius r2 sets l2 = 1; its branch label changes from b2 = 2 to b2 = 1 where the branches cross.]
level-one test
linking tracking
branch cross
GMR Recall
l6 = 0 b6 = 0 l6 = 0 b6 = 0 l6 = 2 b6 = 4 l6 = 2 b6 = 4 l6 = 1 b6 = 6 l6 = 1 b6 = 6
input
w1
w2
w3 l1 = 0
b1 = 0 l1 = 0 b1 = 0
l5 = 0 b5 = 0 l5 = 0 b5 = 0
l2= 0 b2= 0 l2= 0 b2= 0
l3 = 0 b3 = 0 l3 = 0 b3 = 0 l4 = 0
b4 = 0 l4 = 0 b4 = 0 l7 = 0 b7 = 0 l7 = 0 b7 = 0 l8= 0
b8 = 0 l8= 0 b8 = 0
w4
w5 w6
l1 = 1 b1 = 1 l1 = 1 b1 = 1
l3 = 2 b3 = 1 l3 = 2 b3 = 1
l2= 1 b2= 2 l2= 1 b2= 2 l2= 1 b2= 1 l2= 1 b2= 1 l4 = 1
b4 = 4 l4 = 1 b4 = 4
l5 = 2 b5 = 4 l5 = 2 b5 = 4 l4 = 1 b4 = 5 l4 = 1 b4 = 5 l4 = 1 b4 = 4 l4 = 1 b4 = 4
… until completion of the candidates
level one neurons : input within their domain
level two neurons : only connected ones
level zero neurons : isolated (noise)
w7 w8
l6 = 1 b6 = 4 l6 = 1 b6 = 4
clipping
Tow Branches
Tow BranchesTwo
Branches Two Branches
GMR Recall
[Figure: final labels; level-one neurons w1, w2, w4, w6 with branches b1 = 1, b2 = 1, b4 = 4, b6 = 4; w3 at level two.]
Output = weight complements of the level-one neurons
Output interpolation