
RBF Neural Networks and a New Algorithm for Training RBF Networks

Content
1. Summary
2. Introduction about function regression
3. RBF Neural Networks and a new algorithm for training RBF networks
4. Experiment
5. Conclusion
1. SUMMARY
Gaussian radial basis function (RBF) networks are commonly used for interpolating multivariable functions. However, how to choose the number of neurons in the hidden layer and the appropriate centers of the RBFs so as to obtain a good interpolating network is still open and attracts the interest of researchers. This report proposes using equally spaced nodes as the centers of the hidden layer. After that, k-nearest-neighbour regression is used to interpolate the function values at those centers, and a new algorithm is used to train the RBF network. Results show that the generality of networks trained by this new algorithm is sensibly improved and the running time is significantly reduced, especially when the number of nodes is large.
2. FUNCTION REGRESSION
2.1.1 Introduction to regression
Let D be a set in R^n and f: D (⊂ R^n) → R^m a multivariable function on D. We only know a set T in D consisting of N vectors x^1, x^2, …, x^N with f(x^i) = y^i, i = 1, 2, …, N, and we must find f(x) for other points x in D (x = (x_1, …, x_n)).
We find a function φ(x) on D such that
φ(x^i) ≈ y^i,  i = 1, …, N   (1)
and use φ(x) instead of f(x). When m > 1, the interpolation problem is equivalent to m problems of interpolating m real-valued multivariable functions. Therefore we only need to work with m = 1.
2.1.2 K-nearest neighbour (k-NN) regression
In this method, one chooses a natural number k. For each x ∈ D, x = (x_1, …, x_n), we determine φ(x) from f at the k nodes nearest to x as follows. Denote by z^1, …, z^k the k vectors in T nearest to x (where d(u, v) is the distance between u and v in D); then φ(x) is defined as
φ(x) = ρ_0 + Σ_{j=1}^{n} ρ_j x_j   (2)
where the parameters ρ_j are chosen so that the sum of squared errors over the set z^1, …, z^k,
Σ = (1/2) Σ_{i=1}^{k} ( φ(z^i) − f(z^i) )² = (1/2) Σ_{i=1}^{k} ( ρ_0 + Σ_{j=1}^{n} ρ_j z_j^i − f(z^i) )²,
is smallest. We find the parameters ρ_j from the system of normal equations, that is
Σ_{i=1}^{k} ( ρ_0 + Σ_{j=1}^{n} ρ_j z_j^i − f(z^i) ) = 0   (3)
and
Σ_{i=1}^{k} ( ρ_0 + Σ_{l=1}^{n} ρ_l z_l^i − f(z^i) ) z_j^i = 0,  j = 1, …, n.   (4)
Solving the system (3)-(4), for each x we determine the parameters ρ_0, …, ρ_n and obtain φ(x) as in (2).
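To make the computation above concrete, the following C++ fragment sketches the k-NN linear regression of formulas (2)-(4): for a query point x it finds the k nearest training nodes, builds the normal equations, solves them, and evaluates (2) at x. This is an illustrative sketch, not the authors' implementation; the names knnRegress and solve and the small Gaussian-elimination solver are assumptions introduced here.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// Solve the (n+1)x(n+1) normal equations A*rho = b by Gaussian elimination with pivoting.
static Vec solve(std::vector<Vec> A, Vec b) {
    const std::size_t m = b.size();
    for (std::size_t c = 0; c < m; ++c) {
        std::size_t p = c;
        for (std::size_t r = c + 1; r < m; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
        std::swap(A[c], A[p]);
        std::swap(b[c], b[p]);
        for (std::size_t r = c + 1; r < m; ++r) {
            const double f = A[r][c] / A[c][c];
            for (std::size_t j = c; j < m; ++j) A[r][j] -= f * A[c][j];
            b[r] -= f * b[c];
        }
    }
    Vec rho(m);
    for (std::size_t i = m; i-- > 0;) {
        double s = b[i];
        for (std::size_t j = i + 1; j < m; ++j) s -= A[i][j] * rho[j];
        rho[i] = s / A[i][i];
    }
    return rho;
}

// phi(x) by k-NN linear regression over the training nodes (xs[i], ys[i]).
double knnRegress(const std::vector<Vec>& xs, const Vec& ys, const Vec& x, std::size_t k) {
    const std::size_t n = x.size();
    // 1. Find the k nodes nearest to x (Euclidean distance).
    std::vector<std::size_t> idx(xs.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    auto dist2 = [&](std::size_t i) {
        double d = 0.0;
        for (std::size_t j = 0; j < n; ++j) d += (xs[i][j] - x[j]) * (xs[i][j] - x[j]);
        return d;
    };
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](std::size_t a, std::size_t b) { return dist2(a) < dist2(b); });
    // 2. Build the normal equations (3)-(4) for rho = (rho_0, rho_1, ..., rho_n).
    std::vector<Vec> A(n + 1, Vec(n + 1, 0.0));
    Vec b(n + 1, 0.0);
    for (std::size_t t = 0; t < k; ++t) {
        const Vec& z = xs[idx[t]];
        Vec row(n + 1, 1.0);                              // (1, z_1, ..., z_n)
        for (std::size_t j = 0; j < n; ++j) row[j + 1] = z[j];
        for (std::size_t a = 0; a <= n; ++a) {
            for (std::size_t c = 0; c <= n; ++c) A[a][c] += row[a] * row[c];
            b[a] += row[a] * ys[idx[t]];
        }
    }
    const Vec rho = solve(A, b);
    // 3. Evaluate phi(x) = rho_0 + sum_j rho_j * x_j, formula (2).
    double y = rho[0];
    for (std::size_t j = 0; j < n; ++j) y += rho[j + 1] * x[j];
    return y;
}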
2.2 THE IDEA AND SOLUTION FOR THE APPROXIMATE INTERPOLATION PROBLEM WITH WHITE-NOISE DATA
With training on equally spaced nodes, the one-phase HDH algorithm can be applied in many applications that need fast training time, such as computer graphics and pattern recognition. To exploit this advantage fully, Hoàng Xuân Huấn suggested the idea of using the one-phase HDH algorithm to solve the interpolation problem with noisy, unequally spaced data. The idea is:
Step 1: Based on the unequally spaced nodes and their measured values with white noise, use a regression method to create a new data set of equally spaced nodes on a grid defined over the range of the original unequally spaced nodes. The value at each new equally spaced node is noise-reduced.
Step 2: Use the one-phase HDH algorithm to train the RBF network on the new data; we obtain a network that not only approximately interpolates the function but also reduces the noise.
Figure 1: The grid of nodes based on the original values of the unequally spaced nodes
The figure above describes the two-dimensional case: the grid of new equally spaced nodes (the red circles) is built over the range of the original values of the original nodes (the blue triangles). The value at each grid node (circle) is computed by regression over the values of the k nearest original nodes (triangles). The RBF network is then trained by the one-phase HDH algorithm with the new equally spaced nodes (circles) and their noise-reduced values as input data.
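Step 1 can be sketched in C++ as follows (illustrative only; buildGrid and the two-dimensional setting of the figure are assumptions, and knnRegress is the k-NN regression sketch given earlier): a grid of equally spaced nodes is laid over the bounding box of the original nodes, and each grid node receives a noise-reduced value by k-NN regression.

#include <algorithm>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

double knnRegress(const std::vector<Vec>& xs, const Vec& ys, const Vec& x, std::size_t k);

// Build an m1 x m2 grid of equally spaced nodes over the bounding box of the original
// 2-D nodes xs and assign each grid node a noise-reduced value by k-NN regression.
void buildGrid(const std::vector<Vec>& xs, const Vec& ys, std::size_t m1, std::size_t m2,
               std::size_t k, std::vector<Vec>& gridNodes, Vec& gridValues) {
    double a1 = xs[0][0], b1 = xs[0][0], a2 = xs[0][1], b2 = xs[0][1];
    for (const Vec& p : xs) {                                 // bounding box [a1,b1] x [a2,b2]
        a1 = std::min(a1, p[0]); b1 = std::max(b1, p[0]);
        a2 = std::min(a2, p[1]); b2 = std::max(b2, p[1]);
    }
    const double h1 = (b1 - a1) / (m1 - 1), h2 = (b2 - a2) / (m2 - 1);   // grid steps
    gridNodes.clear();
    gridValues.clear();
    for (std::size_t i = 0; i < m1; ++i)
        for (std::size_t j = 0; j < m2; ++j) {
            Vec z = {a1 + i * h1, a2 + j * h2};               // equally spaced node
            gridNodes.push_back(z);
            gridValues.push_back(knnRegress(xs, ys, z, k));   // noise-reduced value
        }
}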
2.3 The multivariable function approximation problem
Approximation of a multivariable function is considered the general problem, of which interpolation is a special case. In the interpolation problem, the interpolating function must take exactly the given values at the given nodes. When the number of nodes is large, determining the interpolating function φ becomes more complex, so we accept approximate values at the given nodes and choose a simple function for which the error is smallest.
The given problem: the function y = f(x) is measured at nodes {x^k}_{k=1}^{N} belonging to D in R^n, that is, y^k = f(x^k) for all k = 1, …, N, with x^k = (x_1^k, …, x_n^k) ∈ D and y^k ∈ R^m.
To approximate f(x) we need a function of a given form such that the error at each node is as small as possible.
The chosen function is usually of the form φ(x) = Φ(x, c_1, c_2, …, c_k), and the error is usually determined with respect to the parameters c_1, c_2, …, c_k by the least-squares criterion
Σ_{i=1}^{N} ‖φ(x^i) − y^i‖²,  where ‖φ(x^i) − y^i‖² = Σ_{j=1}^{m} ( φ_j(x^i) − y_j^i )².
Therefore, φ(x) is considered the best approximation of f(x) in the least-squares sense. The graph of the function y = φ(x) need not pass through every node as in interpolation.
In many cases this problem gets trapped in a local minimum. To avoid that, one usually uses an iterative method with re-initialized parameters in each iteration in order to reach the global least-squares minimum.
In some cases the number of nodes N is huge; to reduce the computation, instead of summing over i = 1, …, N one can sum over i = 1, …, M with M < N, minimizing
Σ_{i=1}^{M} ( φ(z^i) − f(z^i) )²,
where {z^k}_{k=1}^{M} is the set of nodes nearest to x. This is a local method, and the function Φ(x, c_1, …, c_k) is chosen to be linear.
3. RBF NETWORKS AND QHDH TRAINING ALGORITHM
RBF networks are networks with 3 layers (2 neuron layers). Each neuron in the hidden layer computes a non-linear function of the distance between the input vector X and the center vector C_j associated with neuron j, with radius σ_j. The combination of the input vector X and the center vectors C_j creates a matrix of distance functions with the corresponding radii; this matrix is used to compute the weights of the neurons in the network.
3.1 Radial Basis Function
3.1.1 The multivariable interpolation problem with the RBF approach
Consider the multivariable function f: D (⊂ R^n) → R^m given at the nodes {x^k, y^k}_{k=1}^{N} (x^k ∈ R^n; y^k ∈ R^m) such that f(x^k) = y^k, k = 1, …, N. We need to find a function φ of the given form
φ(x) = Σ_{k=1}^{M} w_k h(‖x − v^k‖, σ_k) + w_0,  with φ(x^k) = y^k, ∀k = 1, …, N   (3.1)
where {x^k}_{k=1}^{N} is a set of n-dimensional vectors (the interpolation nodes) and y^k = f(x^k) is the measured value of the function f to be interpolated; the real-valued function h(‖x − v^k‖, σ_k) is called a radial basis function (RBF) with center v^k and radius σ_k; M (≤ N) is the number of radial basis functions used to determine f; and w_k, σ_k are the parameters to be found.

3.1.2 The radial basis function technique
Consider the interpolation problem with m = 1 and a number of interpolation nodes that is not too large. We look for the function φ in the form
φ(x) = Σ_{k=1}^{N} w_k φ_k(x) + w_0   (3.2)
where φ_k(x) is a radial basis function. There are many different radial basis functions; the most widely used is the Gaussian. The following formulas introduce the technique with the Gaussian RBF:
φ_k(x) = e^{−‖x − v^k‖² / σ_k²},  ∀k = 1, …, N   (3.3)
In (3.2) and (3.3) we have:
• ‖·‖ is the Euclidean norm, with ‖u‖² = Σ_i u_i².
• v^k is the center of the RBF φ_k. The centers are taken to be the interpolation nodes, v^k = x^k for all k, so that M = N (more detail in chapter 3 of [13]).
• The parameters w_k and σ_k must be found such that φ satisfies the interpolation conditions (3.1):
φ(x^i) = Σ_{k=1}^{N} w_k φ_k(x^i) + w_0 = y^i,  ∀i = 1, …, N   (3.4)
For each k, the parameter σ_k controls the effective range of the RBF φ_k: when ‖x − v^k‖ > 3σ_k, the value φ_k(x) is tiny and negligible. Consider the N×N square matrix
Φ = (φ_{k,i})_{N×N},  where φ_{k,i} = φ_k(x^i) = e^{−‖x^i − x^k‖² / σ_k²}   (3.5)
With given parameters σ_k, Micchelli [14] proved that Φ is invertible and positive definite when the nodes x^k are pairwise distinct. Therefore, for a given w_0, the system of equations (3.4) always has a unique solution w_1, …, w_N.
The sum of squared errors is defined by formula (3.6):
E = Σ_{i=1}^{N} ( φ(x^i) − y^i )²   (3.6)
The universal approximation and best approximation properties of radial basis functions are investigated in [22][31][54]. The interpolation approach has the advantage that the sum of squared errors E is always at its global minimum (page 98 in [13]). From this conclusion, algorithms have been proposed for interpolating and approximating functions based on least squares or on solving the system of equations [49].
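As a small illustration of this technique (an assumed sketch, not the authors' code), the fragment below builds the Gaussian interpolation matrix Φ of (3.5) with centers v^k = x^k, and evaluates φ(x) of (3.2)-(3.3) once the weights w_0, w_1, …, w_N are known; the names buildPhi, evalRbf and dist2 are introduced here purely for illustration.

#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double dist2(const Vec& a, const Vec& b) {          // squared Euclidean distance
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Phi[i][k] = exp(-||x^i - x^k||^2 / sigma_k^2): the N x N matrix of formula (3.5).
std::vector<Vec> buildPhi(const std::vector<Vec>& x, const Vec& sigma) {
    const std::size_t N = x.size();
    std::vector<Vec> Phi(N, Vec(N));
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t k = 0; k < N; ++k)
            Phi[i][k] = std::exp(-dist2(x[i], x[k]) / (sigma[k] * sigma[k]));
    return Phi;
}

// phi(x) = w0 + sum_k w_k * exp(-||x - x^k||^2 / sigma_k^2), formulas (3.2)-(3.3).
double evalRbf(const std::vector<Vec>& centers, const Vec& sigma, const Vec& w,
               double w0, const Vec& x) {
    double y = w0;
    for (std::size_t k = 0; k < centers.size(); ++k)
        y += w[k] * std::exp(-dist2(x, centers[k]) / (sigma[k] * sigma[k]));
    return y;
}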
3.1.3 Some radial basis functions
The non-linear radial basis function f can take one of the following forms:
Gaussian function:
f(x) = e^{−(x − c)² / r²}   (3.7)
where c ∈ R is the center of the RBF and r is its radius. The value of the Gaussian RBF increases as x gets closer to the center, as shown in Figure 3.1.
Figure 3.1: Gaussian RBF with r = 1 and c = 0
Multiquadric function:
f(x) = ( (x − c)² + r² )^{1/2}   (3.8)
Figure 3.2: Multiquadric RBF with r = 1 and c = 0
Inverse multiquadric function:
f(x) = ( (x − c)² + r² )^{−1/2}   (3.9)
Figure 3.3: Inverse multiquadric RBF with r = 1 and c = 0
Cauchy function:
f(x) = r / ( (x − c)² + r² )   (3.10)
Figure 3.4: Cauchy RBF with r = 1 and c = 0
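For reference, the four one-dimensional radial basis functions above, with center c and radius r, can be written directly as in the sketch below; the function names are illustrative, and the inverse multiquadric and Cauchy expressions simply follow the forms (3.7)-(3.10) as given above, so they should be read as assumptions rather than a definitive transcription.

#include <cmath>

double gaussianRbf(double x, double c, double r)     { return std::exp(-(x - c) * (x - c) / (r * r)); }      // (3.7)
double multiquadric(double x, double c, double r)    { return std::sqrt((x - c) * (x - c) + r * r); }        // (3.8)
double invMultiquadric(double x, double c, double r) { return 1.0 / std::sqrt((x - c) * (x - c) + r * r); }  // (3.9)
double cauchyRbf(double x, double c, double r)       { return r / ((x - c) * (x - c) + r * r); }             // (3.10)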
3.2 RBF network structure
The RBF neural network has 3 layers (2 neuron layers), and its transfer functions are radial basis functions. The structure includes:
i) an input layer with n nodes for the input vector x ∈ R^n;
ii) a hidden layer with M neurons, where each neuron k has center v^k and its output is the corresponding RBF φ_k;
iii) an output layer with m neurons holding the output values.
The transfer functions of the hidden layer are RBFs, and the output transfer functions are linear.
Figure 3.5: RBF neural network structure
Figure 3.5 describes the general structure of an RBF network, where {x^k}_{k=1}^{N} is the set of vectors in n-dimensional space and {y^k}_{k=1}^{N} are the corresponding expected (target) vectors for the inputs x^k. w_0 is the threshold of each output neuron. The output of each output neuron is given by formula (3.11):
r_j = w_{1j} φ_1 + … + w_{Mj} φ_M + w_{0j}   (3.11)

With each x→ ϕ
1
= f
1
(x,v
1
), …, x → ϕ
M
= f
M
(x,v
M
), and
ϕ
k
= f
k
(x,v
k
) =
2
2
2
||||
k
k
vx
e
σ
−−

, k = 1, …, M
(3.12)
The j-th component of the output vector is calculated by formula (3.13):
z_j = Σ_{k=1}^{M} w_{kj} φ_k / Σ_{k=1}^{M} φ_k   (3.13)
or, in averaged form,
z_j = (1/M) Σ_{k=1}^{M} w_{kj} φ_k   (3.14)
In formulas (3.11) and (3.12), φ_k is the radial basis function, w_{kj} is the connection weight from hidden neuron k to output neuron j, x is the input signal of the network, and v^m is the center of the corresponding radial function. The center vector v^m = (v_{1m}, …, v_{nm}) of hidden neuron m has n components, matching the input vector, and M, the number of radial functions, is the number of hidden-layer neurons.
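The forward pass described by (3.12)-(3.13) can be sketched in C++ as follows (illustrative names and layout; not the authors' code): hidden outputs φ_k are computed from (3.12) and the normalized output z_j from (3.13).

#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// centers: M center vectors v^k; sigma: M radii; W[k][j]: weight from hidden neuron k
// to output neuron j; x: input vector. Returns the output vector z of formula (3.13).
Vec rbfForward(const std::vector<Vec>& centers, const Vec& sigma,
               const std::vector<Vec>& W, const Vec& x) {
    const std::size_t M = centers.size(), J = W[0].size();
    Vec phi(M);
    for (std::size_t k = 0; k < M; ++k) {                   // formula (3.12)
        double d2 = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i)
            d2 += (x[i] - centers[k][i]) * (x[i] - centers[k][i]);
        phi[k] = std::exp(-d2 / (2.0 * sigma[k] * sigma[k]));
    }
    double s = 0.0;
    for (double p : phi) s += p;                            // normalizing denominator
    Vec z(J, 0.0);
    for (std::size_t j = 0; j < J; ++j) {                   // formula (3.13)
        for (std::size_t k = 0; k < M; ++k) z[j] += W[k][j] * phi[k];
        z[j] /= s;
    }
    return z;
}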
When training the network, the following tasks are performed:
• identify the center vectors;
• select the corresponding radius parameters σ_m;
• create the connection weights w;
• train the connection weights so that the total squared error E is smallest.
3.3 Algorithms for training interpolation RBF networks: HDH and QHDH
a) The two-phase HDH training algorithm
We denote the identity matrix of size N by I, and let
W = (w_1, …, w_N)^T,  Z = (z_1, …, z_N)^T
be vectors in the N-dimensional space R^N, in which
z_k = y_k − w_0,  ∀k ≤ N   (9)
and denote
Ψ = I − Φ = [ψ_{j,k}]_{N×N}   (10)
with
ψ_{j,k} = 0 if k = j,  and  ψ_{j,k} = −e^{−‖x^k − x^j‖² / σ_k²} if k ≠ j.   (11)
Then the system of equations (3.4) is equivalent to
W = ΨW + Z   (12)
We set w_0 to the average value of the y_k:
w_0 = (1/N) Σ_{k=1}^{N} y_k   (13)
For each k ≤ N, the function q_k of σ_k is defined as
q_k = Σ_{j=1}^{N} |ψ_{j,k}|   (14)
The HDH algorithm proceeds as follows. Given a tolerance ε and positive constants q, α < 1, the algorithm consists of two phases that determine the parameters σ_k and W*. In the first phase, we determine each σ_k such that q_k ≤ q and q_k is as close to q as possible (meaning that if σ_k were replaced by σ_k/α, then q_k > q). The norm of the matrix Ψ induced by the vector norm ‖u‖* = Σ_{j=1}^{N} |u_j| is then smaller than q. The approximate solution of the second phase is found by the method of simple iteration. The algorithm is shown in Figure 1.
Figure 1: The two-phase HDH training algorithm

Procedure Two-phase algorithm for training interpolation RBF networks
  for k = 1 to N do
    determine σ_k such that q_k ≤ q and, if σ_k were replaced by σ_k/α, then q_k > q;  // 1st phase
  find W* by the method of simple iteration;  // 2nd phase
End
To determine the solution W* of the system (12), we run the following iterative procedure:
1. First, set W^0 = Z;
2. Then compute W^1 = ΨW^0 + Z;
3. If the stopping condition is not satisfied, assign W^0 := W^1 and return to step 2.
For each N-dimensional vector u we use the norm ‖u‖* = Σ_{j=1}^{N} |u_j|; the stopping condition can then be chosen as
(q / (1 − q)) · ‖W^1 − W^0‖* ≤ ε
Then the algorithm always terminates after a finite number of steps, and the following estimate always holds:
‖W^1 − W*‖* ≤ ε
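The second phase can be sketched as a plain C++ routine (an illustrative sketch following formulas (10)-(14) and the stopping rule above, not the authors' code): starting from W^0 = Z, it repeats W^1 = ΨW^0 + Z until (q/(1−q))·‖W^1 − W^0‖* ≤ ε, with ‖u‖* = Σ_j |u_j|.

#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

static double normStar(const Vec& u) {                      // ||u||_* = sum_j |u_j|
    double s = 0.0;
    for (double v : u) s += std::fabs(v);
    return s;
}

// Solve W = Psi*W + Z by simple iteration with the stopping rule of the HDH algorithm.
Vec simpleIteration(const std::vector<Vec>& Psi, const Vec& Z, double q, double eps) {
    const std::size_t N = Z.size();
    Vec W0 = Z, W1(N);
    while (true) {
        for (std::size_t i = 0; i < N; ++i) {                // W1 = Psi*W0 + Z
            double s = Z[i];
            for (std::size_t j = 0; j < N; ++j) s += Psi[i][j] * W0[j];
            W1[i] = s;
        }
        Vec diff(N);
        for (std::size_t i = 0; i < N; ++i) diff[i] = W1[i] - W0[i];
        if (q / (1.0 - q) * normStar(diff) <= eps) break;    // end condition
        W0 = W1;
    }
    return W1;                                               // then ||W1 - W*||_* <= eps
}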
b) The QHDH algorithm
If the interpolation nodes are equidistant, they can be expressed in multi-index form:
x^{i_1,…,i_n} = (x_1^{i_1}, …, x_n^{i_n})   (15)
in which x_k^{i_k} = x_k^0 + i_k·h_k, where the h_k (k = 1, …, n) are given constants (the step of variable x_k), n is the number of dimensions, and i_k runs from 1 to N_k (N_k being the number of nodes along dimension k).
Instead of the Euclidean norm we use the norm ‖x‖_A = sqrt(x^T A x), where A is the diagonal matrix
A = diag(1/h_1², 1/h_2², …, 1/h_n²).
Then expressions (3.2) and (3.3) can be rewritten as follows:
φ(x) = Σ_{i_1=1}^{N_1} … Σ_{i_n=1}^{N_n} w_{i_1,…,i_n} φ_{i_1,…,i_n}(x) + w_0   (4')
in which
φ_{i_1,…,i_n}(x) = e^{−‖x − x^{i_1,…,i_n}‖_A² / σ_{i_1,…,i_n}²}   (5')
The matrix ( φ_{i_1,…,i_n}(x^{j_1,…,j_n}) )_{N×N} is a square matrix of size N = N_1 … N_n, with
φ^{j_1,…,j_n}_{i_1,…,i_n} = φ_{i_1,…,i_n}(x^{j_1,…,j_n}) = e^{−‖x^{j_1,…,j_n} − x^{i_1,…,i_n}‖_A² / σ_{i_1,…,i_n}²}   (16)
The matrix Ψ = I − Φ, in which
ψ^{j_1,…,j_n}_{i_1,…,i_n} = 0 if (j_1,…,j_n) = (i_1,…,i_n), and
ψ^{j_1,…,j_n}_{i_1,…,i_n} = −e^{−‖x^{j_1,…,j_n} − x^{i_1,…,i_n}‖_A² / σ_{i_1,…,i_n}²} otherwise.   (17)
With a given positive q < 1, we choose
σ_{i_1,…,i_n} = ( ln( 6 / ((q + 1)^{1/n} − 1) ) )^{−1/2}   (18)
Then q_{i_1,…,i_n} = Σ_{j_1,…,j_n} |ψ^{j_1,…,j_n}_{i_1,…,i_n}| ≤ q < 1, so the norm of Ψ is smaller than q.
Subsequently, we apply the method of simple iteration of the second phase of the algorithm described in part a) to determine the output-layer weights. In this way the one-phase QHDH algorithm is obtained. With a given positive constant q < 1, the algorithm can be stated as follows:
Figure 2: One-phase iterative algorithm for training RBF networks with equidistant nodes

Procedure One-phase algorithm for training RBF networks with equidistant nodes
  determine σ_{i_1,…,i_n} by formula (18);
  find W* by the method of simple iteration (2nd phase of HDH, part a);
End
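For the equidistant case, a sketch of how σ and Ψ might be set up in C++ follows (illustrative only; the closed-form value in sigmaQHDH follows formula (18) as written above and should be treated as an assumption rather than the authors' exact constant). With the A-norm, the step sizes h_k cancel and the squared distance between grid nodes reduces to the sum of squared index differences; the example uses a 2-D grid of N1 x N2 nodes.

#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// sigma chosen once for all nodes, following formula (18).
double sigmaQHDH(double q, int n) {
    return 1.0 / std::sqrt(std::log(6.0 / (std::pow(q + 1.0, 1.0 / n) - 1.0)));
}

// Psi of formula (17) for a 2-D equidistant grid; row/column index = i1*N2 + i2.
std::vector<Vec> buildPsiGrid(std::size_t N1, std::size_t N2, double sigma) {
    const std::size_t N = N1 * N2;
    std::vector<Vec> Psi(N, Vec(N, 0.0));
    for (std::size_t i1 = 0; i1 < N1; ++i1)
        for (std::size_t i2 = 0; i2 < N2; ++i2)
            for (std::size_t j1 = 0; j1 < N1; ++j1)
                for (std::size_t j2 = 0; j2 < N2; ++j2) {
                    const std::size_t r = i1 * N2 + i2, c = j1 * N2 + j2;
                    if (r == c) continue;                    // zero diagonal
                    const double d1 = double(i1) - double(j1);
                    const double d2 = double(i2) - double(j2);
                    Psi[r][c] = -std::exp(-(d1 * d1 + d2 * d2) / (sigma * sigma));
                }
    return Psi;
}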
3.4 A new function approximation method
We return to the problem of approximating a multivariate function f: D (⊂ R^n) → R given a set T = {x^j, y^j}_{j=1}^{N} whose values can be written as y^j = f(x^j) + ε_j, j = 1, …, N, as in expression (1) with m = 1. Suppose D is contained in the n-dimensional box B = Π_{i=1}^{n} [a_i, b_i]. The method for constructing an RBF network approximating this function is as follows:
Step 1. Choose a natural number k > n and a grid of evenly spaced nodes {z^j}_{j=1}^{M} on B. (Our experience shows that M should be chosen larger than N.)
Step 2. Apply the k-NN method of Section 2.1.2 to determine the approximate values of f(z^k) at the corresponding grid points z^k.
Step 3. Apply QHDH to train the RBF network.
In this way we obtain an RBF network that approximates f on D. The procedure is demonstrated in Figure 3.
Figure 3: Constructing an approximating RBF network with equidistant nodes

Procedure RBF network construction for function approximation
  choose k and an equidistant grid of nodes {z^j}_{j=1}^{M} on B;
  calculate the approximate values {f(z^j)}_{j=1}^{M};   // by the k-NN method
  train the approximating RBF network;                   // QHDH algorithm
End
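Putting the pieces together, a usage sketch of the whole construction might look as follows (all helper names come from the illustrative sketches earlier in this report, not from the authors' implementation):

#include <cstddef>
#include <numeric>
#include <vector>

using Vec = std::vector<double>;

double knnRegress(const std::vector<Vec>&, const Vec&, const Vec&, std::size_t);
void buildGrid(const std::vector<Vec>&, const Vec&, std::size_t, std::size_t,
               std::size_t, std::vector<Vec>&, Vec&);
double sigmaQHDH(double q, int n);
std::vector<Vec> buildPsiGrid(std::size_t N1, std::size_t N2, double sigma);
Vec simpleIteration(const std::vector<Vec>& Psi, const Vec& Z, double q, double eps);

// Train an approximating RBF network on 2-D data (xs, ys): Steps 1-2 build the grid
// and its k-NN values; Step 3 runs the iterative phase of QHDH. Returns the weights W*.
Vec trainApproxRbf(const std::vector<Vec>& xs, const Vec& ys,
                   std::size_t m1, std::size_t m2, std::size_t k,
                   double q, double eps, std::vector<Vec>& gridNodes, double& w0) {
    Vec gridValues;
    buildGrid(xs, ys, m1, m2, k, gridNodes, gridValues);    // Steps 1-2
    w0 = std::accumulate(gridValues.begin(), gridValues.end(), 0.0) / gridValues.size();
    Vec Z(gridValues.size());
    for (std::size_t i = 0; i < Z.size(); ++i) Z[i] = gridValues[i] - w0;   // formula (9)
    const double sigma = sigmaQHDH(q, 2);                   // formula (18) with n = 2
    return simpleIteration(buildPsiGrid(m1, m2, sigma), Z, q, eps);         // Step 3
}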
4. Experimental Results
We carried out experiments comparing the approximation error on data taken from the website:
/>
The network is trained and the errors are compared. The effectiveness of the algorithm is compared with the GGAP method of Guang-Bin Huang and colleagues and is shown by experiment to be better than the other methods.
The test program was written in Visual Studio C++ 2010 and run on Windows 7 build 7601 32-bit, with 2 GB RAM and an Intel Core 2 Duo T7300 2.0 GHz CPU.
4.1 The selection of the grid size M
We collected information on the variables using all the block groups in California from the 1990 Census. In this sample a block group on average includes 1425.5 individuals living in a geographically compact area. Naturally, the geographical area included varies inversely with the population density. We computed distances among the centroids of each block group as measured in latitude and longitude. We excluded all the block groups reporting zero entries for the independent and dependent variables. The final data contained 20,640 observations on 9 variables. The dependent variable is ln(median house value).
The experimental results in Table 1 show that:
1) When the number of grid nodes M is small, the error is significantly larger; as the grid becomes denser (M larger), the error improves. Increasing the number of grid nodes beyond a certain point, however, does not improve the error much further.
2) The approximation is better where there is no noise.
Table 1. Approximation error of the network as M changes, with N = 20640, k = 12, γ = 0.9

The average error of the network with different grid sizes M:
M = 10x10 = 100    M = 20x20 = 400    M = 30x30 = 900    M = 40x40 = 1600   M = 50x50 = 2500
0.2702             0.1654             0.1407             0.1095             0.1036
4.2 The selection of k
We experiment with the same real data as in the previous section and M = 900 grid points. The results are in Table 2.

Table 2. Approximation error of the network as k changes
Average error of the network with different k:
k = 8      k = 10     k = 12     k = 14     k = 16
0.2529     0.1905     0.1407     0.1416     0.1733

The experiments show that as k increases, the noise-reduction capability increases, but the error can also grow, because distant points used when computing the values at the grid nodes can affect the generality of the function.
4.3 Comparison with the Guang-Bin Huang GGAP network
Guang-Bin Huang (2005) proposed the GGAP method for training RBF networks to approximate multivariate functions with normally distributed mixed noise; experimental results showed it to be more effective than other commonly used methods (MRAN, MAIC). We compared the effectiveness of our network with a network trained by the GGAP method, using the real data introduced above. The sample size is N = 20,640. The number of grid nodes is M = 30x30 = 900, with varying values of k. The experimental results are shown in Table 3 below (entries lower than the GGAP column indicate results better than GGAP).
Table 3. Approximation error of GGAP and of the kNN-HDH network for different γ and k

γ      GGAP       kNN-HDH
                   k=8       k=10      k=12      k=14      k=16
0.4    0.31609    0.5227    0.3215    0.2724    0.2811    0.2989
0.5    0.25101    0.4051    0.2917    0.2507    0.2288    0.2544
0.6    0.24102    0.4297    0.2708    0.2196    0.2127    0.2412
0.7    0.22165    0.2277    0.2622    0.1999    0.2008    0.2107
0.8    0.20711    0.2746    0.2066    0.1562    0.1761    0.1998
0.9    0.19169    0.2529    0.1905    0.1407    0.1416    0.1733
1      0.18905    0.1816    0.1859    0.1109    0.1222    0.1608
Comments: The experimental results show that, as k increases, the error of the network trained by the proposed k-NN-based method becomes better than that of Guang-Bin Huang's GGAP.
5. Conclusion
The new method, which combines k-NN linear regression with the QHDH algorithm for RBF network training, yields a multivariate function approximation network whose performance, demonstrated experimentally, is very promising.
In the future we will apply it to real data in pattern recognition problems to test its practical effectiveness.
REFERENCES
1. R. H. Bartels, J. C. Beatty, Brian A. Barsky, An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, Morgan Kaufmann, 1987.
2. E. Blanzieri, “Theoretical Interpretations and Applications of Radial Basis
Function Networks”, Technical Report DIT-03-023, Informatica e
Telecomunicazioni, University of Trento, 2003
3. M.D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge University Press, 2004.
4. M.J.D.Powell, “Radial basis function approximations to polynomials”, in: Proc.
Numerical analysis 1987, Dundee, UK, 1988, pp. 223-241.
5. D.S. Broomhead and D. Lowe, "Multivariable functional interpolation and adaptive networks", Complex Systems, vol. 2, 1988, pp. 321-355.
6. J.Park and I.W. Sandberg. “Approximation and radial-basis-function networks”,

Neural Comput., vol. 5,no. 3, 1993, pp 305-316.
7. T. Poggio, F. Girosi, “Networks for approximating and learning”, IEEE Proc. 78
(9) (1990), pp 1481–1497.
8. E.J. Hartman, J.D. Keeler and J.M. Kowalski, “Layered neural networks with
Gaussian hidden units as universal approximations”, Neural Comput, vol. 2, no. 2,
1990, pp. 210-215.
9. C.G. Looney, Pattern recognition using neural networks: Theory and algorithm
for engineers and scientist, Oxford University press, New York, 1997.
10. M. Bortman, M. A. Aladjem, Growing and Pruning Method for Radial Basis Function Networks, IEEE Transactions on Neural Networks, Vol. 20, No. 6, 2009, pp. 1039-1045.
11. J. P. F. Sum, Chi-Sing Leung, K. I. J. Ho, On Objective Function, Regularizer, and Prediction Error of a Learning Algorithm for Dealing With Multiplicative Weight Noise, IEEE Transactions on Neural Networks, Vol. 20, No. 1, 2009, pp. 124-138.
12.Hoang Xuan Huan, Dang Thi Thu Hien and Huu Tue Huynh, A Novel Two-Phase
Efficient Algorithm for Training Interpolation Radial Basis Function Networks,
Signal Processing, Vol. 87, Issue 11 November 2007, pp. 2708–2717.
13. Hoang Xuan Huan, Dang Thi Thu Hien and Huu Tue Huynh, An efficient algorithm for training interpolation RBF networks with equally spaced nodes, IEEE Transactions on Neural Networks, Vol. 22, No. 6, June 2011, pp. 982-988.
14.D.T.T. Hien, H.X. Huan and H.T.Huynh, “Multivariate Interpolation using Radial
Basis Function Networks”, IJDMMM, Vol. 1, No 3,2009, pp 291-309
15.S. Haykin, Neural Networks: A Comprehensive Foundation, second ed., Prentice-
Hall Inc, 1999.
16.C. Micchelli, “Interpolation of scattered data: distance matrices and conditionally
positive definite functions”, Constr. Approx. 2 (1986), pp 11–22.
17. M. H. Hassoun, Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, MA, 1995.
18. G. E. Fasshauer and Jack G. Zhang, “On Choosing “Optimal” Shape Parameters
for RBF Approximation”, Numerical algorithms, Volume 45, Numbers 1-4, pp
345-368.
19.O.Rudenko, O.Bezsonov, Function Approximation Using Robust Radial Basis
Function Networks, Journal of Intelligent Learning Systems and Applications,
Vol.3 No.1, February 2011, pp 17-25
20. Tomohiro Ando, Sadanori Konishi and Seiya Imoto, Nonlinear regression modeling via regularized radial basis function networks, Journal of Statistical Planning and Inference, Volume 138, Issue 11, 1 November 2008, pp. 3616-3633.
21.C.M. Bishop, Pattern Recognition and Machine learning, Springer, 2006
22. Guang-Bin Huang et al., A Generalized Growing and Pruning RBF (GGAP-RBF) Neural Network for Function Approximation, IEEE Transactions on Neural Networks, Vol. 16, No. 1, January 2005.
