

MINISTRY OF NATIONAL DEFENCE
MILITARY TECHNICAL ACADEMY




HA DAI DUONG




APPROACHES TO VISUAL FEATURE EXTRACTION
AND FIRE DETECTION BASED ON DIGITAL IMAGES



Major: Mathematical foundations for Informatics
Code: 62 46 01 10




ABSTRACT OF PHD THESIS IN MATHEMATICS



HA NOI - 2014

THIS THESIS IS COMPLETED AT
MILITARY TECHNICAL ACADEMY - MINISTRY OF NATIONAL DEFENCE






Scientific Supervisor: Assoc. Prof. Dr. Dao Thanh Tinh




Reviewer 1: Assoc. Prof. Dr. Nguyen Duc Nghia

Reviewer 2: Assoc. Prof. Dr. Dang Van Duc

Reviewer 3: Assoc. Prof. Dr. Nguyen Xuan Hoai



The thesis was evaluated by the examination board of the academy under
decision number / , / / of the Rector of Military
Technical Academy, meeting at Military Technical Academy on
… /… /………



This thesis can be found at:
- Library of Le Quy Don Technical University
- National Library of Vietnam

ABSTRACT

Automatic fire detection has been of interest for a long time
because fire causes large-scale damage to human life and property.
Until now, several kinds of automatic detection devices, such as smoke
detectors, flame or radiation detectors, and gas detectors, have been
invented. Although these traditional fire detection devices have
proven their usefulness, they have some limitations: they are generally
limited to indoor use and require close proximity to the fire, and most
of them cannot provide additional information about the fire's
circumstances. Recently, a new approach to automatic fire detection
based on computer vision has attracted increasing attention from
researchers; it offers some advantages over the traditional detectors
and can be used as a complement to existing systems. This technique can
detect fire from a distance in large open spaces and give more useful
information about the fire's circumstances, such as the size, location
and growth rate of the fire, and, in particular, it has the potential
to raise an alarm early.

This research concentrates on early fire detection based on
computer vision. Firstly, some techniques that have been used in
the literature of automatic fire detection are reviewed. Secondly,
some visual features of fire regions for early fire detection are
examined in detail, including a model of fire-color pixels, a
model of temporal change detection, a model of textural analysis and
a model of flickering verification, as well as a novel model of the
spatial structure of fire regions. Finally, three models of fire
detection based on computer vision at the early stage of fire are
presented: a model of early fire detection for the general use-case
(EVFD), a model of early fire detection in weak-light environments
(EVFD_WLE), and a model of early fire detection for the general
use-case using SVM (EVFD_SVM).
CHAPTER 1. INTRODUCTION
1.1 Automated fire detection problems
Automatic fire detection has been of interest for a long time
because fire causes large-scale damage to human life and property.
Heat or thermal detectors are the oldest type of automatic detection
device, originating from the mid-19th century, with several types still
in production today. Since then, other kinds of automatic detection
devices, such as smoke detectors, flame or radiation detectors, and gas
detectors, have been invented. Although these traditional fire detection
devices have proven their usefulness, they have some limitations.
Despite the advances in traditional fire alarm technology over the last
century, losses caused by fire, such as deaths, permanent injuries, and
damage to property and the environment, still increase. In order to
decrease these losses, timely detection, early fire localization and
detection of fire propagation are essential.

The problem of fire detection based on computer vision was
initiated in the early 1990s by Healey G. et al.; since then, various
approaches to this issue have been proposed. However, vision-based
fire detection is not a completely solved problem, as is the case for
most computer vision problems. The visual features of the flames and
smoke of an uncontrolled fire depend on distance, illumination and
burning materials. In addition, cameras are not color and/or spectral
measurement devices; they have sensors with different algorithms for
color and illumination balancing, and therefore they may produce
different images and video for the same scene. Consequently, most
proposed methods in vision-based fire detection return good results
under some use-case conditions and may give bad results under others.
In particular, existing vision-based fire detection methods do not
pay adequate attention to raising an alarm early.
1.2 Research objective
For all the above reasons, the author has studied the topic
"Approaches to visual feature extraction and fire detection based on
digital images", with the main interest in the problem of vision-based
fire detection at the early stage of fire. The main question, and also
the motivation for this research, is: can vision-based fire detection
give a fire alarm as soon as possible at the early stage of fire? This
thesis seeks answers to that question in several different use-cases,
such as general conditions, weak-light environments, and a dynamic
camera. The objectives of this research include the following
issues: 1) Firstly, some techniques that have been used for fire
detection based on computer vision are reviewed. 2) Secondly, some
visual features of fire regions, such as color, texture, temporal
change, flicker and spatial structure, are examined in detail so as to
reduce the computational complexity of the algorithms. 3) Thirdly,
some models of early fire detection based on computer vision are
developed. The development of each model relies on the analysis of
the use-case, such as building and office surveillance, or warehouses
with a weak-light environment, etc. Intelligent classification is also
applied to make the models more suitable and accurate.
1.3 Contributions
This thesis makes the following contributions:
1. Develop and propose some methods of visual feature extraction
for fire regions. Four new methods of pixel or fire-region
segmentation are developed, including a method for fire-color pixels
based on Bayes classification in RGB space, a method of temporal
change detection, a method of textural analysis and a method of
flickering verification; a novel approach to the spatial structure of
fire regions using top and rings features is also proposed.

2. Propose a model of vision-based fire detection for early fire
detection in the general use-case, EVFD. This model is a combination
of temporal change analysis, pixel classification based on a
fire-color process, and flickering verification.

3. Propose a model of vision-based fire detection for early fire
detection in weak-light environments, EVFD_WLE. This proposal is a
combination of pixel classification based on a fire-color process and
analysis of the spatial structure of the fire region; these processes
are performed if the environmental light is weak.

4. Propose a model of vision-based fire detection for early fire
detection in the general use-case using SVM, EVFD_SVM. In this
model, the algorithm consists of three main tasks: pixel-based
processing using the fire-color process for pixel classification,
temporal change detection, and recovery of missing pixels; extraction
of textural features of the potential fire region; and SVM
classification for distinguishing a potential fire region as a fire
or non-fire object.
1.4 Thesis outline
This thesis is organized as follows. Chapter 1, Introduction,
presents the need for the problem of fire detection based on computer
vision, the disadvantages of traditional fire detection systems, and
the advantages of fire detection based on computer vision. This chapter
also describes the research problem, the research question, the main
contributions and the structure of the thesis. Chapter 2, Fire detection
techniques based on computer vision: A review, reviews some techniques
that have been used for fire detection based on computer vision.
Chapter 3, Visual feature extraction for fire detection, examines in
detail some visual features of fire regions for early fire detection,
then develops four new models of pixel or fire-region segmentation and
proposes a novel model of the spatial structure of fire regions.
Chapter 4, Early fire detection based on computer vision, presents
three models of fire detection based on computer vision: early fire
detection in the general use-case, early fire detection in weak-light
environments, and early fire detection in the general use-case using
SVM. Chapter 5, Conclusions and Discussions, states the conclusions,
presents the contributions, summarizes the results obtained throughout
the thesis and recommends future research on the problem.
CHAPTER 2. FIRE DETECTION BASED ON COMPUTER VISION: A REVIEW
2.1 Introduction
Automatic fire detection has been of interest for a long time due
to the large-scale damage fire causes to human life and property. Heat
or thermal detectors are the oldest type of automatic detection device,
originating from the mid-19th century. Since then, other kinds of
automatic detection devices, for example smoke detectors, flame or
radiation detectors, and gas detectors, have been developed. Although
these devices have proven their usefulness in some conditions, they
have some limitations. They are generally limited to indoor use and
require close proximity to the fire. Most of them cannot provide
additional information about the fire's circumstances and may take a
long time to raise an alarm.

Fire detection based on computer vision can be dated to the
research of Healey G. et al. in the early 1990s. Since then, various
approaches to this issue have been proposed. The general scheme of fire
detection based on computer vision is a combination of two components:
the analysis of visual features and the classification techniques. The
visual features include color, temporal changes, spatial variance,
texture and flickering. The classification techniques are used to
classify a pixel as fire or non-fire, or to distinguish a potential
fire region as a fire or non-fire object; these techniques include the
Gaussian Mixture Model (GMM), Bayes classification, Support Vector
Machine (SVM), Markov models and neural networks, etc.

2.2 Visual features analysis
2.2.1 The chromatic color
Color detection is one of the most important and earliest features
used in vision-based fire detection. The majority of the color-based
approaches make use of the RGB color space, sometimes in combination
with the HSI/HSV color space. Fire-color models often used in the
literature of vision-based fire detection include statistically
generated color models and Gaussian Mixture Models (GMM). Based on an
analysis of flame color in the red-yellow range, the most common type
of flame in the real world, a fire-color model to segment a pixel is
proposed as follows:

C1: R > R_T,
C2: R >= G and G > B,
C3: S >= (255 - R) * S_T / R_T,

and the fire-color model is defined as:

Fire(x,y) = 1 if C1, C2 and C3 all hold, and 0 otherwise,

where R, G, B are the red, green and blue components of the pixel at
(x,y) respectively; S is the saturation component in the HSI color
space; and S_T and R_T are two experimental factors. Several other
works detect fire-color pixels using a more complex model such as the
Gaussian mixture model; in such a model, a given pixel is considered a
fire-color pixel if its color value lies inside one of the
distributions. Denote by d(r1, g1, b1, r2, g2, b2) the distance from
(r1, g1, b1) to (r2, g2, b2) in the 3-dimensional RGB space. The
fire-color model based on GMM is described as:

Fire(x,y) = 1 if there exists i, 1 <= i <= 10, such that
d(R, G, B, R_i, G_i, B_i) <= 2*v_i, and 0 otherwise,

in which R_i, G_i, B_i are the means of the red, green and blue
components of the i-th Gaussian distribution and v_i is its standard
deviation.
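As a concrete illustration, the three chromatic rules above can be checked per pixel. The following minimal sketch assumes 8-bit RGB input; the values used for the experimental thresholds R_T and S_T are placeholders chosen here for illustration only:

```python
def is_fire_color(r, g, b, r_t=115, s_t=0.45):
    """Chromatic rule test: returns 1 for a fire-colored pixel.

    r_t and s_t are illustrative thresholds, not the thesis's values.
    """
    total = r + g + b
    # saturation component of the HSI color model
    s = 1 - 3 * min(r, g, b) / total if total else 0
    c1 = r > r_t                          # rule C1: enough red
    c2 = r >= g > b                       # rule C2: red-yellow ordering
    c3 = s >= (255 - r) * s_t / r_t       # rule C3: saturation bound
    return 1 if (c1 and c2 and c3) else 0
```

A flame-like pixel such as (230, 160, 40) passes all three rules, while a bluish pixel fails C1 and C2.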
2.2.2 The temporal changes
A color model alone is not enough to identify fire pixels or fire
regions, since many objects share the same color as fire. An important
visual feature for distinguishing between fire and fire-like objects is
the temporal change of fire. To analyze temporal changes that may be
caused by flame, most proposals assume that the camera is stationary. A
simple approach to estimating the background is to average the observed
image frames of the video. Let I(x,y,n) represent the intensity value
of the pixel at location (x,y) in the n-th frame; the background
intensity value B(x,y,n+1) at the same pixel position is calculated as
follows:

B(x,y,n+1) = a*B(x,y,n) + (1 - a)*I(x,y,n)  if (x,y) is stationary,
B(x,y,n+1) = B(x,y,n)                       if (x,y) is moving,

where B(x,y,n) is the previous estimate of the background intensity
value at the same pixel position. The update parameter a is a positive
real number close to one. Initially, B(x,y,0) is set to the first frame
I(x,y,0). The pixel at (x,y) is assumed to be moving if

|I(x,y,n) - I(x,y,n-1)| > T(x,y,n),

where I(x,y,n-1) is the intensity value of the pixel at (x,y) in the
(n-1)-th frame and T(x,y,n) is a recursively updated threshold at (x,y)
of frame n. Another method commonly used to analyze temporal changes is
frame differencing.
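The running-average background update above can be written out directly. A minimal sketch on plain 2-D intensity lists, where a fixed `thresh` stands in for the recursively updated threshold T(x,y,n):

```python
def update_background(bg, frame, prev_frame, thresh, a=0.9):
    """One step of the running-average background estimate above.

    All arguments are equal-sized 2-D lists of intensities; `a` is the
    update weight (close to one), and `thresh` is a fixed stand-in for
    the recursively updated threshold T(x, y, n).
    """
    h, w = len(frame), len(frame[0])
    new_bg = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            moving = abs(frame[y][x] - prev_frame[y][x]) > thresh
            if moving:
                new_bg[y][x] = bg[y][x]  # moving pixel: keep old estimate
            else:
                new_bg[y][x] = a * bg[y][x] + (1 - a) * frame[y][x]
    return new_bg
```

A stationary pixel drifts toward the observed intensity, while a moving pixel leaves the background estimate untouched.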
2.2.3 The textural and spatial difference

Flames of an uncontrolled fire have varying colors even within a
small area, and spatial color difference analysis focuses on this
characteristic. Using range filters, variance/histogram analysis, or
spatial wavelet analysis, the spatial color variation in pixel values
is analyzed to distinguish between fire and fire-like objects. Using
wavelet analysis, Toreyin et al. compute a value v to estimate spatial
variations as follows:

v = (1/(M*N)) * Sum_{x,y} [ s_lh(x,y)^2 + s_hl(x,y)^2 + s_hh(x,y)^2 ],

where s_lh(x,y), s_hl(x,y) and s_hh(x,y) are the low-high, high-low and
high-high sub-images of the wavelet transform, respectively, and M*N is
the number of pixels of the potential fire region. If the decision
parameter v exceeds a threshold, then it is likely that the region
under investigation is a fire region. In another way, Borges et al. use
a well-known metric, the variance, to indicate the amount of coarseness
in the pixel values. For a potential fire region R, the variance of the
pixels is computed as

c = Sum_{(x,y) in R} (I(x,y) - I_mean)^2 * p(I(x,y)),

in which I(x,y) is the intensity of the pixel at (x,y), p() is the
normalized histogram, and I_mean is the mean intensity in R. Fire is
assumed if the region has a variance c > λσ, where λσ is
determined from a set of experimental analyses.
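Borges' coarseness measure can be read as the variance of the region's intensity histogram; a minimal sketch under that reading:

```python
from collections import Counter

def region_coarseness(intensities):
    """Histogram variance c = sum_r p(r) * (r - mean)^2 over a region's
    pixel intensities, one common reading of the measure above."""
    n = len(intensities)
    # normalized histogram p(r) of the region
    p = {r: cnt / n for r, cnt in Counter(intensities).items()}
    mean = sum(r * pr for r, pr in p.items())
    return sum(pr * (r - mean) ** 2 for r, pr in p.items())
```

Fire would then be assumed for regions whose coarseness exceeds the experimentally determined bound λσ.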
2.3 Classification techniques
Popular approaches to the classification of the multi-dimensional
feature vectors obtained from each candidate flame region are Bayes
classification and SVM classification. Other classification methods
that have been used in the literature of vision-based fire detection
include neural networks, Markov models, etc. This section introduces
the two classification methods used in this research: Bayes and SVM
classification.
2.4 Conclusion

The development of applications based on computer vision for
fire detection, which can raise alarms quickly and accurately, is
essential. However, vision-based fire detection is not a completely
solved problem, as is the case for most computer vision problems. The
visual features of the flames of an uncontrolled fire depend on the
distance, illumination and burning materials. In addition, cameras are
not color and/or spectral measurement devices; they have sensors with
different algorithms for color and illumination balancing, and
therefore they may produce different images and video for the same
scene. For the above reasons, research on vision-based fire detection
is necessary.

In general, most proposed methods in vision-based fire detection
return good results under some use-case conditions and may give bad
results under others. In particular, current vision-based fire
detection methods do not pay adequate attention to raising an alarm
early, so research on vision-based fire detection is necessary, and
using this technique for early fire detection is an important issue.
CHAPTER 3. VISUAL FEATURE EXTRACTION FOR FIRE DETECTION
This chapter examines in detail some visual features of fire
regions for early fire detection, and then develops four new models of
pixel or fire-region segmentation, including a model of fire-color
pixels, a model of temporal change detection, a model of textural
analysis and a model of flickering verification; it also proposes a
novel model of the spatial structure of fire regions.

3.1 A new approach to color extraction
3.1.1 Chromatic analysis
The fire-color model is usually used in the first step of the
process and is crucial to the final result. The general idea of most
proposals in the VFD literature is to determine a fire-color model
Fire(x,y) for the pixel at (x,y), and then to use that model to build
the potential fire mask PFM(x,y) as follows:

PFM(x,y) = 1 if Fire(x,y), and 0 otherwise.

After that, the mask PFM is used to analyze the other characteristics
of fire such as temporal changes, deformation of the boundary, surface
statistical parameters, etc. The main drawback of existing fire-color
models is that they are fixed: they return good results in some
situations and bad results in others. For more flexibility, this study
proposes a color model of pixels in a fire region using Bayesian
classification; relying on the red (R), green (G) and blue (B)
components, a fire-color model that classifies a pixel into two
classes, fire or non-fire, is developed.
3.1.2 Classification based on Bayes
For a pixel p at (x,y), the vector v = [R, G, B]^T is considered as
a sample for the classification problem, in which R, G and B are the
red, green and blue components of p. Let g1(v) and g2(v) be two
discriminant functions based on Bayesian classification for the fire
and non-fire classes of pixel p; if g1(v) > g2(v) then p belongs to the
fire class, otherwise p belongs to the non-fire class. Denote by
omega_1 the set of fire-class samples and by omega_2 the set of
non-fire-class samples. The Bayesian discriminant functions are defined
as follows:

g1(v) = v^T W1 v + w1^T v + c1,
g2(v) = v^T W2 v + w2^T v + c2,

in which

W1 = -(1/2) C1^(-1),  W2 = -(1/2) C2^(-1),
w1 = C1^(-1) m1,      w2 = C2^(-1) m2,
c1 = -(1/2) m1^T C1^(-1) m1 - (1/2) ln|C1| + ln P(omega_1),
c2 = -(1/2) m2^T C2^(-1) m2 - (1/2) ln|C2| + ln P(omega_2),

where m1 is the mean and C1 the covariance matrix of omega_1, and m2 is
the mean and C2 the covariance matrix of omega_2. The fire-color model,
denoted ColorF, is then defined as:

ColorF(x,y) = 1 if g1(v) > g2(v), and 0 otherwise.
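The quadratic discriminant above can be estimated directly from labeled RGB samples. A sketch with NumPy; the sample arrays, priors and test pixels below are illustrative toy data, not the thesis's training sets:

```python
import numpy as np

def train_discriminant(samples, prior):
    """Build g(v) = v^T W v + w^T v + c for one class from an (n, 3)
    array of RGB samples, following the definitions above."""
    m = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)       # class covariance matrix
    Ci = np.linalg.inv(C)
    W = -0.5 * Ci
    w = Ci @ m
    c = -0.5 * m @ Ci @ m - 0.5 * np.log(np.linalg.det(C)) + np.log(prior)
    return lambda v: v @ W @ v + w @ v + c

# ColorF(x, y) = 1 exactly when g1(v) > g2(v) for the pixel's RGB vector v.
fire = np.array([[250., 150., 30.], [240., 140., 45.], [230., 165., 50.],
                 [245., 152., 38.], [236., 148., 42.]])    # toy fire samples
non_fire = np.array([[40., 60., 120.], [50., 70., 130.], [45., 80., 110.],
                     [55., 65., 125.], [48., 72., 118.]])  # toy non-fire samples
g1 = train_discriminant(fire, 0.5)
g2 = train_discriminant(non_fire, 0.5)
```

A reddish pixel near the fire samples then satisfies g1(v) > g2(v), and a bluish one the reverse.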






In this work, three types of fire color are investigated: group 1
consists of images with the red-yellow fire color caused by paper,
wood, etc.; group 2 consists of images with the blue fire color from
gas; and group 3 consists of images with the red-yellow fire color in a
weak-light environment.

The results of training with the prepared samples of group 1:
g1 = -.68e-3*R*R + .11e-2*R*G - .44e-3*R*B - .82e-3*G*G + .98e-3*G*B - .47e-3*B*B + .16*R - .92e-1*G + .33e-1*B - 23
g2 = -.73e-3*R*R + .26e-2*R*G - .13e-2*R*B - .39e-2*G*G + .52e-2*G*B - .20e-2*B*B + .30e-1*R - .98e-2*G + .56e-2*B - 12

The results of training with the prepared samples of group 2:
g1 = -.42e-3*R*R + .13e-2*R*G - .56e-3*R*B - .14e-2*G*G + .12e-2*G*B - .41e-3*B*B + .46e-1*R - .44e-1*G + .63e-1*B - 17
g2 = -.98e-3*R*R + .26e-2*R*G - .10e-2*R*B - .35e-2*G*G + .46e-2*G*B - .20e-2*B*B + .37e-1*R - .16e-1*G - .37e-2*B - 12

The results of training with the prepared samples of group 3:
g1 = -.19e-2*R*R + .38e-2*R*G - .10e-2*R*B - .40e-2*G*G + .44e-2*G*B - .17e-2*B*B + .29*R - .16*G + .17e-1*B - 28
g2 = -.43e-2*R*R + .13e-1*R*G - .42e-2*R*B - .1287e-1*G*G + .11e-1*G*B - .29e-2*B*B - .27e-1*R + .16*G - .87e-1*B - 11
Figure 1. The number of misclassified pixels in comparison with ColorF
(models compared: Chen, Horng, Toreyin, Celik, Borges)
3.1.3 Experiments
For comparison and testing, the author performs the experiment of
color segmentation with the model proposed by T. Celik et al., the
model proposed by P. V. K. Borges et al., the model of T. H. Chen et
al., the model proposed by W. B. Horng et al., the model proposed by
B. U. Toreyin et al., and the model proposed in this study, denoted
ColorF. The chart in Figure 1 represents the sum of misclassified
pixels on the tested images for each model. The chart shows that the
proposed model, ColorF, and Chen's model have the lowest total number
of misclassified pixels.
3.2 A new approach to temporal change, texture and flicker
extraction
3.2.1 Temporal changes analysis
In order to detect temporal changes that may be caused by fire, it
is necessary to use an effective background-modeling algorithm.
Temporal change caused by fire is usually slow, so existing methods of
motion detection, such as background subtraction or frame differencing,
are often inefficient. In this research, temporal change is estimated
region by region between two consecutive frames using the correlation
coefficient. The correlation coefficient between two regions A and B is
computed as:

CC(A,B) = [ Sum_{x=1..w, y=1..h} A(x,y)*B(x,y) ]
          / sqrt( Sum_{x,y} A(x,y)^2 * Sum_{x,y} B(x,y)^2 ),

in which w and h are the width and height of A and B, and A(x,y),
B(x,y) are the intensities of the pixel at (x,y) in A and B
respectively.

Formally, assume that I and J are two consecutive frames, and M and
N are the numbers of rows and columns of the dividing grid. The scheme
of temporal change detection between two consecutive frames, I and J,
in this approach is described as follows:
1. Divide I and J into M*N regions, denoted I_k and J_k for
k = 1, ..., M*N (Figure 2);
2. Calculate the correlation coefficient between the k-th region I_k
and the corresponding region J_k, and assign it to CH_k:
CH_k = CC(I_k, J_k), for k = 1, ..., M*N;
3. Establish the change map using the correlation coefficient as:
CMCC(x,y) = 1 if (x,y) is in I_k and CH_k < T for some
k = 1, ..., M*N, and 0 otherwise.
For evaluation, 8 videos with 320x240 resolution and a total of
1200 frames are used. For comparison, the frame-difference model, the
background-subtraction model, and the proposed model are implemented.
The evaluation of time performance is shown in Table 1, and the quality
of temporal change detection is shown in Figure 3.
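The block correlation coefficient above (a normalized cross-correlation without mean subtraction) can be computed per region as follows; a minimal sketch on 2-D intensity lists:

```python
import math

def corr_coeff(A, B):
    """CC(A, B) for two equal-sized intensity blocks, as defined above."""
    num = sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    den = math.sqrt(sum(a * a for ra in A for a in ra) *
                    sum(b * b for rb in B for b in rb))
    return num / den if den else 0.0
```

Identical blocks give CC = 1; the more the block content changes between frames, the lower the coefficient, so a region is marked as changed when its CC falls below the threshold T.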


Figure 2. The scheme of partition of two frames for temporal analysis
Method                 | Time performance per frame (milliseconds)
Frames difference      | 23.7
Background subtraction | 38.8
CMCC                   | 24.7
Table 1. The comparison of time performance

Figure 3. Example results of the three temporal change detection techniques

Figure 4. The ROC curve of temporal changes detection
Figure 4 shows the ROC (Receiver Operating Characteristic) curve of
temporal change detection as the threshold T varies. Based on this
evaluation, when the threshold T = 0.025, the true positive fraction is
95% and the false positive fraction is 6%.

3.2.2 Textural analysis
Intuitively, fire has unique visual signatures such as color and
texture. The textural features used for a fire region include the
average values of the red, green and blue components, the skewness of
the red-component histogram, and the surface coarseness. Denote by PFR
the potential fire region and by K the number of pixels in PFR; the
textural features on PFR are computed as follows. The average values of
the red, green and blue components are:

x1 = Sum_{(x,y) in PFR} R(x,y) / K,
x2 = Sum_{(x,y) in PFR} G(x,y) / K,
x3 = Sum_{(x,y) in PFR} B(x,y) / K.

Call p'(r) the normalized histogram of the red component in PFR, and
let m2 and m3 be the variance and the third moment of p'(r); the
skewness of p'(r),

x4 = m3 / m2^(3/2),

is considered a textural feature. Call p(r) the normalized histogram of
the gray levels in PFR; the variance

x5 = Sum_{r=0..L-1} p(r) * (r - m)^2

and the third moment

x6 = Sum_{r=0..L-1} p(r) * (r - m)^3

of p(r) are two further features, where L is the number of gray levels
in the image and m is the mean gray level. For each candidate region,
the eight features mentioned above are evaluated to construct the
vector v:

v = [x1, x2, x3, x4, x5, x6, x7, x8]^T,

and v is used to indicate whether the candidate region contains fire or
not by applying a Bayes classifier. Let g_FR(v) and g_NR(v) be the
decision functions for fire and non-fire; the textural model for a
potential fire region PFR is defined as:

TextureF(PFR) = 1 if g_FR(v) > g_NR(v), and 0 otherwise.

Figure 5. The number of misclassified pixels in comparison with TextureF

In comparison with the ColorF and Chen models, the total number of
misclassified pixels of the three methods is shown in Figure 5.
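The feature computations above can be sketched on raw pixel data. The following illustration computes the first six features for a list of (R, G, B, gray) tuples; the remaining features x7 and x8 are omitted here:

```python
def texture_features(region):
    """First six of the textural features above for a list of
    (R, G, B, gray) pixel tuples; x7 and x8 are omitted in this sketch."""
    K = len(region)
    x1 = sum(p[0] for p in region) / K           # mean red
    x2 = sum(p[1] for p in region) / K           # mean green
    x3 = sum(p[2] for p in region) / K           # mean blue

    def moments(values):
        mean = sum(values) / len(values)
        m2 = sum((v - mean) ** 2 for v in values) / len(values)
        m3 = sum((v - mean) ** 3 for v in values) / len(values)
        return m2, m3

    m2r, m3r = moments([p[0] for p in region])
    x4 = m3r / m2r ** 1.5 if m2r else 0.0        # skewness of red histogram
    x5, x6 = moments([p[3] for p in region])     # gray-level variance, 3rd moment
    return [x1, x2, x3, x4, x5, x6]
```

The resulting vector would then be fed to the Bayes (or, in Chapter 4, SVM) classifier.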
3.2.3 Flickering analysis
In order to describe the flickering of fire, this work uses the
change of the width and height of the fire region to distinguish it
from non-fire regions. Three consecutive frames and the sizes of their
fire regions are shown in Figure 6.

Figure 6. Three consecutive frames and the sizes of their fire regions
Assume that PFR is a potential fire region, a and b are the width
and the height of the rectangle that contains PFR, and r is the ratio
between a and b; the flickering of the fire region leads to changes of
a, b and r. To estimate the change of a, b and r between two
consecutive frames, the changes of these parameters are first computed
as:

a_c = 1 if |a1 - a2| > a0, and 0 otherwise,
b_c = 1 if |b1 - b2| > b0, and 0 otherwise,
r_c = 1 if |r1 - r2| > r0, and 0 otherwise,

where a1, b1 and r1 are computed from the potential fire region in the
previous frame; a2, b2 and r2 are computed from the fire region in the
current frame; and a0, b0 and r0 are three experimental thresholds.
Finally, the flickering of PFR is defined as:

FlickerF(PFR) = 1 if a_c + b_c + r_c >= 2, and 0 otherwise.
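The flickering test above reduces to a few comparisons on bounding-box sizes; a minimal sketch, where the threshold defaults a0, b0 and r0 are illustrative, not the thesis's experimental values:

```python
def flicker_f(prev_box, cur_box, a0=4, b0=4, r0=0.1):
    """FlickerF as defined above; boxes are (width, height) of the
    bounding rectangle of the potential fire region."""
    a1, b1 = prev_box
    a2, b2 = cur_box
    r1, r2 = a1 / b1, a2 / b2            # width/height ratios
    ac = 1 if abs(a1 - a2) > a0 else 0
    bc = 1 if abs(b1 - b2) > b0 else 0
    rc = 1 if abs(r1 - r2) > r0 else 0
    return 1 if ac + bc + rc >= 2 else 0  # at least two parameters changed
```

A region whose box jumps in both width and height between frames is flagged as flickering, while a nearly static box is not.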
  






3.3 A novel approach to spatial structure extraction
This section presents a novel model for fire-region verification.
The spatial structure of the fire region is considered in terms of the
rings and top features of the fire region.
3.3.1 Rings feature of fire region
Assume Omega is the set of pixels in image I that satisfies:

Omega = { (x,y) in I : Fire(x,y) = True },

in which Fire(x,y) is a fire-color model. The fuzzy clustering
technique Fuzzy C-Means (FCM) is used to cluster Omega (in RGB space)
into K classes; an example with K = 3 is shown in Figure 7.
Figure 7. An example of rings feature of a fire region
Let Omega(1), Omega(2), ..., Omega(K) be the K classes of Omega
produced by FCM clustering. Consider a pixel p_k in I at (x,y); the
neighborhood of p_k in the same column or row, denoted O4(p_k), is
defined as:

O4(p_k) = { P(x',y') in I : |x' - x| + |y' - y| = 1 },

where P(x',y') is a pixel at (x',y') in I. Let Omega(0) be the empty
set. The set Omega has the rings feature if the K partitions Omega(1),
Omega(2), ..., Omega(K) of Omega satisfy:

for i = 1, ..., K-1: for every pixel p in Omega(i), every neighbor
M in O4(p) that lies in Omega belongs to
Omega(i-1) or Omega(i) or Omega(i+1).   (*)

So the rule to check whether Omega, or image I, has the rings feature
is defined as:

r_Omega = 1 if expression (*) holds, and 0 otherwise.
3.3.2 Top feature of fire region
Intuitively, the hot air of a fire is less dense than its
surroundings and rises, so the top of the flame tends to form in the
middle. To capture this characteristic of a fire region, the top
structure of fire is described as follows. Firstly, find two points
A(x,y) and B(z,y) in Omega that satisfy:

|x - z| = max{ |i - k| : M(i,j) in Omega, N(k,j) in Omega },

i.e., A and B are the endpoints of the widest horizontal extent of
Omega. With A(x,y) and B(z,y) chosen as above, the part of Omega that
lies above AB, denoted Omega_AB, is determined as follows:

Omega_AB = { P(i,j) in Omega : j <= y }.

Secondly, choose a point C(a,b) in Omega_AB such that b <= y' for every
M(x',y') in Omega_AB, i.e., C is a topmost point of Omega_AB. Then the
two following parameters can be computed:

theta_1 = |Omega_AB \ triangle ABC| / |Omega_AB|,
theta_2 = |triangle ABC \ Omega_AB| / |Omega_AB|.
Figure 8. An example of top feature of a fire region
The rule to check whether Omega, or image I, has the top feature is
defined as:

t_Omega = 1 if theta_1 <= theta_01 and theta_2 <= theta_02,
and 0 otherwise,

in which theta_01 and theta_02 are chosen by experiment. If the
triangle ABC contains most of the pixels in the upper part of the fire
region, it is clear that the values of theta_1 and theta_2 are not
large. Figure 8 depicts the structure of the flame with the triangle
ABC described above.
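The top-feature test can be sketched as follows, under the reconstruction above: pick the widest row as the base AB, the topmost pixel as the apex C, rasterize the triangle, and compare it with the upper part of the region. The base-row rule, the y-grows-downward convention and the threshold defaults are illustrative assumptions:

```python
def _sign(p, q, r):
    # cross-product sign used for the point-in-triangle test
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def _in_triangle(pt, a, b, c):
    d1, d2, d3 = _sign(pt, a, b), _sign(pt, b, c), _sign(pt, c, a)
    neg = d1 < 0 or d2 < 0 or d3 < 0
    pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (neg and pos)

def top_feature(omega, t01=0.25, t02=0.25):
    """Top-feature test for a set of (x, y) fire pixels, y growing down."""
    rows = {}
    for x, y in omega:
        lo, hi = rows.get(y, (x, x))
        rows[y] = (min(lo, x), max(hi, x))
    y0 = max(rows, key=lambda y: rows[y][1] - rows[y][0])  # widest row
    A, B = (rows[y0][0], y0), (rows[y0][1], y0)
    upper = {(x, y) for x, y in omega if y <= y0}          # part above AB
    C = min(upper, key=lambda p: p[1])                     # topmost point
    tri = {(x, y) for x in range(min(A[0], B[0]), max(A[0], B[0]) + 1)
                  for y in range(C[1], y0 + 1) if _in_triangle((x, y), A, B, C)}
    th1 = len(upper - tri) / len(upper)   # region pixels outside the triangle
    th2 = len(tri - upper) / len(upper)   # triangle pixels outside the region
    return 1 if th1 <= t01 and th2 <= t02 else 0
```

A triangular blob that narrows toward the top passes the test, while a region that is widest at the top does not.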
3.3.3 Experiments
In these experiments, 563 images are used, divided into three
categories: A) images containing a single fire region at the early
stage of fire (157 images); B) images containing complex scenes when
the fire has broken out (185 images); and C) images that do not contain
fire but do contain some fire-like objects (221 images). For each
group, 20 images are selected randomly, and the results show that the
separation between groups A and B is quite clear: the values of
theta_1 and theta_2 in groups A and B are almost always small. The
evaluation of the rings feature on the tested images is shown in
Table 2.
Image group                             | Number of images | Ring detected | %
Group A - Images with one fire region   | 157              | 155           | 99
Group B - Images with some fire regions | 185              | 12            | 6
Group C - Images without a fire region  | 221              | 203           | 92
Table 2. The results of the test for the rings feature with three image groups

3.4 Summary
The visual features of a fire region play an important role in
vision-based fire detection. In this study, five visual features of
fire regions are examined in detail; four new models of pixel or
fire-region segmentation are developed, including a model of fire-color
pixels ([9]), a model of temporal change detection ([1], [2]), a model
of textural analysis and a model of flickering verification ([1], [2],
[4]); and a novel model of the spatial structure of fire regions is
proposed ([7]).
CHAPTER 4. EARLY FIRE DETECTION BASED ON COMPUTER VISION
This chapter presents three models of fire detection based on
computer vision: early fire detection in general use-case, early fire
detection in weak-light environment, and early fire detection in
general use-case using SVM.
4.1 Early fire detection in the normal use-case
4.1.1 General use-case
This section presents an approach to the problem of early fire
detection based on computer vision for use in general use-case
conditions, with the following assumptions: the camera is static; the
burning material is common, such as paper, wood, etc.; and the fire is
not too far from the camera.
4.1.2 The algorithm EVFD
The model is a combination of temporal analysis using the
correlation coefficient, color analysis based on the RGB color space,
and flickering analysis, as shown in Figure 9.

Figure 9. The scheme of EVFD algorithm
In detail, the EVFD algorithm takes two consecutive frames as
input: the previous frame I and the current frame J, both of size mxn.
The output is a boolean variable A: TRUE to indicate that J contains
fire, FALSE otherwise.
The algorithm EVFD
 Input: Previous frame I, current frame J, integer d, h
 Output: Boolean variable A (TRUE - fire, FALSE - non-fire)
1. Declare and initialize some variables:
Int a, b; A = FALSE; a = m/h; b = n/d;
2. Compute the change map CM using the CMCC model:
a. Calculate correlation-coefficient between a region on I and
corresponding region on J, and assign to CH(k, q)
For k =1 to h
For q =1 to d
CH(k,q) = [ Sum_{i=1..a, j=1..b} I((k-1)a+i, (q-1)b+j) * J((k-1)a+i, (q-1)b+j) ]
          / sqrt( Sum_{i,j} I((k-1)a+i, (q-1)b+j)^2 * Sum_{i,j} J((k-1)a+i, (q-1)b+j)^2 );
b. Establish the change map based on correlation-coefficient,
CM, as follows:
For x =1 to m
For y =1 to n
CM(x,y) = 1 if CH(x\a + 1, y\b + 1) < T, and 0 otherwise
(\ denotes integer division);
3. Establish the potential fire region, PFR, based on the color clue by
using ColorF(x,y) model from the change map CM:
a. Detect potential fire mask PFM:
For x =1 to m

For y =1 to n
PFM(x,y) = 1 if CM(x,y) = 1 and ColorF(x,y) = 1, and 0 otherwise;
b. Establish the potential fire region PFR
PFR = {(x,y): PFM(x,y) = 1};
4. Verify the flickering property of PFR using FlickerF(PFR):
a. If PFR is empty then go to step 6;
b. Compute the flickering FF = FlickerF(PFR);
5. If (FF = 1) then A = TRUE;
6. Return A;
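To make steps 2a and 2b concrete, the following is a minimal Python sketch of the block-wise correlation and the resulting change map. The function name, the nested-list image representation, and the exact indexing are illustrative assumptions of mine; the thesis does not prescribe an implementation.

```python
import math

def cmcc_change_map(I, J, h, d, T):
    """Block-wise change map between frames I and J (m x n grayscale,
    as nested lists). The image is split into an h x d grid of a x b
    blocks; a block is marked as changed when the normalized correlation
    between the two frames over that block drops to <= T."""
    m, n = len(I), len(I[0])
    a, b = m // h, n // d                      # block size, as in step 1
    CM = [[0] * n for _ in range(m)]
    for k in range(h):
        for q in range(d):
            s_ij = s_ii = s_jj = 0.0
            for i in range(a):                 # accumulate over the block
                for j in range(b):
                    vi = I[k * a + i][q * b + j]
                    vj = J[k * a + i][q * b + j]
                    s_ij += vi * vj
                    s_ii += vi * vi
                    s_jj += vj * vj
            # CH(k, q): normalized correlation of the block
            ch = s_ij / math.sqrt(s_ii * s_jj) if s_ii and s_jj else 1.0
            if ch <= T:                        # low correlation -> change
                for i in range(a):
                    for j in range(b):
                        CM[k * a + i][q * b + j] = 1
    return CM
```

Identical blocks give CH = 1 and stay unmarked; a block whose content changes between I and J falls below the threshold and all of its pixels are flagged in CM.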
4.1.3 Experiments
To evaluate this proposal, the EVFD algorithm, 8 videos are used, consisting of 4 indoor videos and 4 outdoor videos, each with a resolution of 320×240. For comparison, the model by T. H. Chen et al. (denoted Chen) and the model by O. Gunay et al. (denoted Gunay) are implemented (color detection and motion detection only). The results on the tested videos are shown in Table 4 and Table 5; the evaluation of time performance and the total number of falsely detected frames is shown in Table 6.
Video                          | First frame has fire | Chen | Gunay | EVFD
Video 1 - Indoor 1 with fire   |   2 |  2 |  3 |   2
Video 2 - Indoor 2 with fire   |   2 |  2 |  3 |   3
Video 3 - Indoor 3 with fire   | 104 |  2 |  2 | 105
Video 4 - Indoor 4 no fire     |   - |  - | 12 |   -
Video 5 - Outdoor 1 with fire  |   2 |  2 |  3 |   2
Video 6 - Outdoor 2 with fire  |   2 |  2 |  3 |   5
Video 7 - Outdoor 3 with fire  |   2 |  2 |  2 |   9
Video 8 - Outdoor 4 no fire    |   - |  2 |  2 |   8
Table 4. The first frame detected as containing fire, in comparison with EVFD
Video                          | Frames with fire | Chen | Gunay | EVFD
Video 1 - Indoor 1 with fire   | 150 | 150 | 149 | 142
Video 2 - Indoor 2 with fire   | 150 | 142 | 149 | 101
Video 3 - Indoor 3 with fire   |  23 |  88 | 149 |   5
Video 4 - Indoor 4 no fire     |   0 |   0 |  98 |   0
Video 5 - Outdoor 1 with fire  | 150 | 150 | 149 |  74
Video 6 - Outdoor 2 with fire  | 150 | 150 | 149 | 100
Video 7 - Outdoor 3 with fire  |  27 |  89 | 137 |   4
Video 8 - Outdoor 4 no fire    |   0 | 150 | 149 |  49
Table 5. The number of frames detected as containing fire, in comparison with EVFD
Method | Time per frame (ms) | Total falsely detected frames
Chen   | 23.4 | 357
Gunay  | 39.3 | 487
EVFD   | 20.0 | 273
Table 6. The evaluation of time and the total number of falsely detected frames
4.2 Early fire detection in weak-light environment
In this section, the problem of fire detection based on computer vision in a weak-light environment (WLE) is considered. In this condition, the flame is small and brighter than the background; the fire region has a high contrast to its surroundings and exhibits a structure of nested rings of colors.
4.2.1 The weak-light environment
Assume p(r) is the normalized gray-level histogram of an image I,
p(r) = n_r / n,
where r ∈ [0, ..., L−1], L is the number of gray levels in the image, n_r is the number of pixels with gray level r, and n is the total number of pixels in I. In a weak-light environment the average gray level of I,
M = Σ_{r=0..L−1} r · p(r),
is small, and the uniformity of I,
U = Σ_{r=0..L−1} p(r) · p(r),
is large. Using these parameters, the light of the environment is verified as weak as follows:
e(I) = 1 if (M ≤ M0) and (U ≥ U0), and 0 otherwise,
where M0 and U0 are predetermined thresholds.
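The weak-light test e(I) can be sketched in a few lines of Python. This is a sketch under assumptions: 8-bit grayscale input given as nested lists, the function name is my own, and the default thresholds M0 = 50 and U0 = 0.05 are the values used in the experiments of section 4.2.3.

```python
def is_weak_light(I, M0=50, U0=0.05, L=256):
    """e(I): return True when image I (nested lists of gray levels in
    [0, L-1]) appears to be taken in a weak-light environment."""
    pixels = [v for row in I for v in row]
    n = len(pixels)
    p = [0.0] * L                          # normalized histogram p(r) = n_r / n
    for v in pixels:
        p[v] += 1.0 / n
    M = sum(r * p[r] for r in range(L))    # average gray level
    U = sum(pr * pr for pr in p)           # uniformity (histogram energy)
    return M <= M0 and U >= U0             # dark and nearly uniform
```

A dark, low-contrast frame has a small mean M and a histogram concentrated on a few levels, so U is large; a well-lit scene fails at least one of the two conditions.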
4.2.2 The algorithm EVFD_WLE
In this scheme, the WLE condition is checked first; then ColorF is used to establish the potential fire region; finally, the spatial structure is analyzed to confirm the potential fire region as a fire or non-fire region. The scheme of EVFD_WLE is shown in Figure 10.
Figure 10. The scheme of the EVFD_WLE algorithm: (1) WLE verification; (2) color detection using ColorF; (3) spatial structure verification (input: frames; output: fire alarm)
The detail of the EVFD_WLE algorithm: the input is an image I of size m×n. The output is a Boolean variable, A; it is TRUE when I contains fire and FALSE when it does not.
The algorithm EVFD_WLE
- Input: image I;
- Output: Boolean variable A;
o A = TRUE if I contains fire
o A = FALSE if I does not contain fire
1. A = FALSE;
2. Verification of the environment light:
a. Calculate the intensity histogram p(r);
b. Calculate M and U:
For r = 0 to L−1
M = M + r · p(r); U = U + p(r) · p(r);
c. If M ≤ M0 and U ≥ U0 then goto 3, otherwise goto 6;
3. Establish the potential fire region, Ω, based on the color clue by using ColorF(x, y):
Ω = {(x, y) : 1 ≤ x ≤ m; 1 ≤ y ≤ n; ColorF(x, y) = True};
4. Analyze the top feature of Ω:
a. Find the points A, B, C as described in section 3.3.2;
b. Compute ρ1 and ρ2:
ρ1 = |{p ∈ Ω_AB : p ∈ ΔABC}| / |Ω_AB|,
ρ2 = |{p ∈ Ω_AB : p ∉ ΔABC}| / |Ω_AB|;
c. If ρ1 ≥ ρ10 and ρ2 ≤ ρ20 then goto 5, otherwise goto 6;
5. Analyze the rings feature of Ω:
a. Cluster Ω into K classes using the Fuzzy C-Means algorithm;
b. Verify the rings feature of Ω: Rings = Rings(Ω);
c. A = Rings;
6. Return A;
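Step 5a relies on Fuzzy C-Means clustering. As a self-contained illustration, the sketch below runs FCM on scalar pixel intensities with fuzzifier m = 2; the deterministic initialization, the fixed iteration count, and the 1-D data are simplifying assumptions of mine, not choices fixed by the thesis. Verifying the rings feature would then check that the resulting intensity classes form nested shells around the flame core.

```python
def fuzzy_cmeans_1d(data, k, m=2.0, iters=50):
    """Cluster scalar intensities into k fuzzy classes (assumes k >= 2).
    Returns (centers, memberships); memberships[i][j] is the degree to
    which data[i] belongs to cluster j."""
    lo, hi = min(data), max(data)
    # deterministic init: centers spread evenly over the intensity range
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    exp = 2.0 / (m - 1.0)
    U = []
    for _ in range(iters):
        U = []
        for x in data:
            dist = [abs(x - c) for c in centers]
            if min(dist) == 0.0:           # point sits exactly on a center
                row = [1.0 if dj == 0.0 else 0.0 for dj in dist]
                s = sum(row)
                row = [r / s for r in row]
            else:                          # standard FCM membership update
                row = [1.0 / sum((dist[j] / dist[l]) ** exp
                                 for l in range(k)) for j in range(k)]
            U.append(row)
        # update each center as the membership-weighted mean of the data
        centers = [
            sum((U[i][j] ** m) * data[i] for i in range(len(data)))
            / sum(U[i][j] ** m for i in range(len(data)))
            for j in range(k)
        ]
    return centers, U
```

On well-separated intensity groups the centers converge to the group means and the memberships become nearly crisp.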
4.2.3 Experiments
These experiments use 7 videos: 4 videos with fire in a weak-light environment, 2 indoor videos, and 1 outdoor video, each with a resolution of 320×240. In this study, the values of M0, U0, ρ10, and ρ20 are set to 50, 0.05, 0.55 and 0.55, respectively. The performance of the proposed method, EVFD_WLE, is computed and compared with the model by T. H. Chen et al. (denoted Chen) and the model by O. Gunay et al. (denoted Gunay). The results on the tested videos are shown in Table 7.
Video                               | Frames | Frames with fire | Chen | Gunay | EVFD_WLE
Video 01 - WLE, camera is static    | 150 |  50 |  90 |  94 |  45
Video 02 - WLE, camera is static    |  49 |  49 |  34 |  48 |  39
Video 03 - WLE, camera is dynamic   | 150 | 113 | 121 | 124 | 108
Video 04 - WLE, camera is dynamic   | 150 |  70 |  41 |  44 |  58
Video 05 - Indoor, camera is static | 150 |   0 | 129 | 149 |   0
Video 06 - Indoor, camera is static | 150 |   0 | 130 | 149 |   0
Video 07 - Outdoor, camera is static| 150 |   0 | 126 | 149 |   0
Table 7. The number of frames detected as containing fire, in comparison with EVFD_WLE
4.3 Early fire detection in the normal use-case using SVM
This section presents an approach to the problem of early fire detection based on computer vision for use in general use-case conditions, focusing on reducing the computational complexity and improving the accuracy of the algorithm.
4.3.1 The approach
This section presents a novel model for early fire detection based on computer vision. It includes three main tasks: pixel-based processing to identify potential fire regions, region-based statistical feature extraction, and support vector machine classification to verify a potential fire region as a fire or non-fire region. In the pixel-based processing phase, color detection uses the ColorF model, and a potential fire region mask is computed. In the region-based phase, an 8-feature vector is evaluated for each potential fire region in a potential fire image, and an SVM classifier then distinguishes the potential fire region as fire or a fire-like object.
4.3.2 The algorithm EVFD_SVM
The scheme of this approach is shown in Figure 11.
Figure 11. The scheme of the EVFD_SVM algorithm: (1) pixel-based processing; (2) region-based processing; (3) SVM classifier (input: frames; output: fire alarm)
The algorithm EVFD_SVM
- Input: previous frame I and current frame J;
- Output: Boolean variable A;
o A = TRUE if J contains fire
o A = FALSE if J does not contain fire
1. A = FALSE;
2. Pixel-based processing:
a. Calculate the potential color mask, ClM, on I:
For x = 1 to m
For y = 1 to n
ClM(x, y) = ColorF(x, y) (on I);
b. Compute the potential fire mask, FM:
For x = 1 to m
For y = 1 to n
FM(x, y) = 1 if |I(x, y) − J(x, y)| ≥ τ and ClM(x, y) = 1, and 0 otherwise;
where τ is a predetermined threshold.
c. Remove small components and recover the fire mask FM:
For x = 1 to m
For y = 1 to n
If FM(x, y) = 0 and FM(x+Δx, y+Δy) = 1 then FM(x, y) = 1;
3. Region-based processing:
a. Reconstruct the potential fire image, FI, as follows:
FI = {I(x, y) : 1 ≤ x ≤ m; 1 ≤ y ≤ n; FM(x, y) = 1};
b. If FI = ∅ then goto step 5; else establish the feature vector v as follows:
v = (x1, x2, x3, x4, x5, x6, x7, x8)^T,
where x1, ..., x8 are described in section 3.2.2 and determined on FI;
4. If svm(v) > 0 then A = TRUE;
5. Return A;
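The pixel-based and region-based phases can be sketched end to end as follows. Everything here is a hedged stand-in: `colorf` replaces the thesis's ColorF model, the three toy statistics replace the 8 features of section 3.2.2 (which are not reproduced in this abstract), and `svm_decision` stands for the decision function of a trained SVM.

```python
def evfd_svm_frame(I, J, colorf, svm_decision, tau=20):
    """Decide whether current frame J contains fire, given previous
    frame I (both m x n grayscale as nested lists).
    colorf(value) -> bool stands in for the ColorF pixel model;
    svm_decision(v) -> float stands in for a trained SVM classifier."""
    m, n = len(I), len(I[0])
    # pixel-based phase: fire-colored AND temporally changed pixels
    FM = [[1 if abs(I[x][y] - J[x][y]) >= tau and colorf(I[x][y]) else 0
           for y in range(n)] for x in range(m)]
    region = [I[x][y] for x in range(m) for y in range(n) if FM[x][y]]
    if not region:                      # empty potential fire image
        return False
    # region-based phase: toy statistical features (mean, variance,
    # relative region size) in place of the thesis's 8-feature vector
    mean = sum(region) / len(region)
    var = sum((p - mean) ** 2 for p in region) / len(region)
    v = [mean, var, len(region) / (m * n)]
    return svm_decision(v) > 0          # positive decision -> fire
```

A toy run would pass a brightness test as `colorf` and a linear function of the feature vector as `svm_decision`; a frame with a changed bright region then triggers a positive decision, while an unchanged frame does not.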
Videos          | Frames | First frame with fire | First frame detected | Frames with fire | Frames detected | False detected (%)
Fire in room 1  | 150 |  2 |   3 | 150 | 120 | 20(-)
Fire in room 2  | 150 |  2 |   3 | 150 | 136 |  9(-)
Fire outdoor 1  |  50 |  2 |   3 |  50 |  36 | 29(-)
Fire outdoor 2  | 350 |  2 |   5 | 350 | 293 | 16(-)
Fire outdoor 3  | 150 |  2 |   4 | 150 |  96 | 36(-)
Fire of lighter | 100 | 49 |  50 |  52 |  13 | 75(-)
Fire of candle  | 500 |  2 | 278 | 500 |   9 | 98(-)
Moving object   | 150 |  - |   - |   0 |   0 |  0(+)
Traffic 1       | 150 |  - |   - |   0 |   0 |  0(+)
Traffic 2       | 150 |  - |   - |   0 |   0 |  0(+)
Table 8. The performance of the EVFD_SVM algorithm
4.3.3 Experiments
To evaluate, a set of 10 videos is used, consisting of 7 fire videos and 3 non-fire videos, each with a resolution of 320×240. The performance of the algorithm is shown in Table 8. The third column in Table 8 is the index of the first frame in the video sequence that contains fire, and the fourth column is the index of the first frame in which fire was detected. For videos 1 to 6, the method detects fire within at most 3 frames of its appearance. For the case of the candle fire (video 7), the method cannot detect the fire early because the flame burned in an enclosed space with almost no movement and few fire-colored pixels.
4.4 Summary
This chapter presented three models of fire detection based on computer vision: early fire detection in the general use-case ([1], [2], [5]), early fire detection in a weak-light environment ([3], [5], [6]), and early fire detection in the general use-case using SVM ([8]).
CHAPTER 5. CONCLUSIONS AND DISCUSSIONS
Automatic fire detection has attracted interest for a long time because fire causes large-scale damage to humans and property. Although fire detection devices have proven their usefulness, they have some limitations: they are generally limited to indoor use and require close proximity to the fire, and most of them cannot provide additional information about the fire circumstances. A new approach to automatic fire detection based on computer vision offers some advantages over the traditional detectors, can be used as a complement to existing systems, and has the potential to raise alarms early.
This research concentrated on early fire detection based on computer vision. First, some techniques that have been used in the literature of automatic fire detection are reviewed. Second, some visual features of fire regions are examined in detail; four new models of pixel or fire-region segmentation and a novel model of the spatial structure of fire regions are proposed. Finally, three models of fire detection based on computer vision at the early stage of fire are presented: a model of early fire detection in the general use-case (EVFD), a model of early fire detection in a weak-light environment (EVFD_WLE), and a model of early fire detection in the general use-case using SVM (EVFD_SVM).
5.1 Summary of contributions
Following the aim of the research, the contributions of this work can be summarized as follows:
1. Development and proposal of methods for extracting visual features of fire regions. Four new methods of pixel or fire-region segmentation are developed: a method for fire-color pixels based on Bayes classification in RGB space, a method of temporal change detection, a method of textural analysis, and a method of flickering