
VIETNAM NATIONAL UNIVERSITY, HANOI
UNIVERSITY OF ENGINEERING AND TECHNOLOGY

TA VIET CUONG

APPLY INCREMENTAL LEARNING FOR DAILY
ACTIVITY RECOGNITION USING LOW LEVEL
SENSOR DATA

MASTER THESIS OF INFORMATION TECHNOLOGY

Ha Noi, 2012


Major: Computer Science
Code: 60.48.01

Supervised by Assoc. Prof. Bui The Duy




Table of Contents

1 Introduction
  1.1 Overview
  1.2 Our Works
  1.3 Thesis Outline

2 Related Work
  2.1 Daily Activity Recognition in Smart Home System
  2.2 Daily Activity Recognition approach
  2.3 Incremental learning model

3 Framework for Activity Recognition in the Home Environment
  3.1 Data Acquisition
  3.2 Data annotation
  3.3 Feature extraction
  3.4 Segmentation

4 Growing Neural Gas model
  4.1 GNG structure
    4.1.1 Competitive Hebbian Learning
    4.1.2 Weights Adapting
    4.1.3 New Node Insertion
  4.2 Using GNG network for supervised learning
    4.2.1 Separated Weighted Euclid
    4.2.2 Reduce Overlapping Regions by Using a Local Error Threshold

5 Radial Basic Function network
  5.1 Standard Radial Basic Function
  5.2 Incremental Radial Basic Function

6 Experiment and Result

7 Conclusion
  7.1 Summary
  7.2 Future works

Apply Incremental Learning for Daily Activity
Recognition Using Low Level Sensor Data

Abstract
Daily activity recognition is an important task in many applications, especially in an environment like a smart home. The system needs to be equipped with recognition ability so that it can automatically adapt to the resident's preferences. So far, a great number of learning models have been proposed to classify activities. However, most of them only work well in an off-line manner, when the training data is known in advance. It is known that people's living habits change over time; therefore, a learning technique that can learn new knowledge whenever new data arrives is in great demand.
In this thesis, we improve an existing incremental learning model to solve this problem. The model, which is traditionally used in unsupervised learning, is extended for classification problems. An incremental learning strategy that evaluates the classification error is applied when deciding whether a new class is generated. A separated weighted Euclid distance is used to measure the distance between samples, because of the large variance of the information contained in feature vectors. To avoid constant allocation of new nodes in overlapping regions, an adaptive insertion constraint is added. Finally, an experiment is carried out to assess its performance. The results show that the proposed method is better than the previous one. The proposed method can be integrated into a smart system, which then pro-actively adapts itself to changes in human daily activity patterns.

1 Introduction

Smart home is considered a part of the Ubiquitous Computing trend. It aims to enrich the home environment with intelligent devices for the purpose of supporting the residents living inside the environment [4]. Understanding the users' activities is one of the key features of a smart home system. Regular activities which are performed in the home can be divided into two types. One type of activity is characterized by simple gestures of some part of the body, such as running or walking. The other is a set of actions which become a pattern in users' daily routines. Some examples of this type are reading, cooking or watching television. Recognizing these activities provides useful information for understanding the environment context. In our thesis, we focus on the problem of recognizing the second type of activities.
The most common data source in a smart home system is low-level sensor information [4]. There have been many proposed approaches for recognizing activities from low-level sensory data ([20, 3, 21]). One of the properties of daily activities in the home environment is their variation through time, because users' habits change in real life. To deal with this problem, previous approaches must train the models again when there are changes in activity patterns. However, this is a resource-consuming process.
In our thesis, we apply an incremental learning model to the problem of activity recognition to resolve the above problem. The incremental learning models are based on the Growing Neural Gas (GNG) network [8]. We extend the growing neural gas model for activity recognition in two ways. The first approach is to use multiple GNG networks. For each activity class, we train a separated network using samples of the class. Technically, the weak point of the traditional growing neural gas is that its network size constantly grows, based on the error evaluation for new sample insertion. This learning strategy encounters a performance problem when classes overlap. We propose another metric for measuring distance in the GNG network, and a constraint on the growing condition, with the purpose of making the models more balanced between classes. The second approach is to create a Radial Basic Function (RBF) network [16] from the GNG network. This approach is similar to the method proposed in [7].
We carry out an experiment to compare both models. Performance is evaluated on a real daily activity dataset. The experimental results show that the task of recognizing daily activities can achieve good results with the incremental learning approach.
The remainder of the thesis is organized as follows. In Section 2, we present the works related to activity recognition and incremental learning. In Section 3, we describe the activity recognition problem in more detail, including data acquisition, data segmentation and feature extraction. In Section 4, we introduce the structure of the GNG network and the way of using it for the daily activity recognition problem. In Section 5, the incremental version of the RBF network based on the GNG network is presented. An experiment with a real data set is presented in Section 6. In Section 7, we discuss the summary of our thesis and future works.

2 Related Work

In recent years, there has been much research on building systems for the smart home environment. In these systems, activity recognition is considered a part of the context recognition module. While the context recognition module refers to understanding a wide range of knowledge in the intelligent environment, the main purpose of activity recognition is to extract information about what the resident is doing. The task of recognizing activities depends on which sensory data the system can perceive from the real world. There are a variety of sensor types in a smart home system. Sensors can be used for position measurement, detecting motion, detecting sound, or reading temperature. These sensors usually create many data streams [14], which have to be combined in order to produce more useful information. Compared to other types of receivers like cameras or microphones, using low-level sensors offers low cost in building the sensor network, and transparency. The data generated by low-level sensors is quite easy to collect and process in comparison to other types of devices like cameras.

2.1 Daily Activity Recognition approach

There are many proposed models for recognizing daily activities. In [20], they built an activity recognition model based on Naive Bayes classifiers. In [23], a multilayer model is used to detect walking movements from passive infrared motion detectors placed in the environment. In [3], they proposed a framework for activity recognition including feature extraction, feature selection and predicting models. The framework uses four popular machine learning methods: Bayes Belief Networks, Artificial Neural Networks, Sequential Minimal Optimization and Boosting.
One of the properties of daily activities is the change of their patterns due to the variation of users' habits. As a result, the decision boundary can change through time. The proposed methods require training the models again when the changes occur. An incremental learning approach can be used to solve this problem.

2.2 Incremental learning model

The incremental learning approach aims to handle the problem of learning new knowledge while maintaining the previous one [11]. In the unsupervised setting, the well-known clustering algorithm k-means [13] can be learned on-line by a weight update rule. An artificial neural network, namely the Self-Organized Map (SOM) [12], is presented as an unsupervised learning method using an incremental approach. More flexible networks with a similar approach are Growing Cell Structures [7] and Growing Neural Gas [8].
In supervised learning, there are several efforts to adapt existing off-line methods to incremental training. Inspired by the adaptive boosting algorithm (AdaBoost) [6], [17] proposed an incremental learning method, namely Learn++. It uses a MultiLayer Perceptron network as a weak learner to generate a number of weak hypotheses. Radial Basic Function (RBF) networks [16] combine a local representation of the input space and a perceptron layer to fit the input pattern to its target. [9] proposed a method to insert new nodes into the RBF network; using this approach, the RBF network is capable of learning over time. Another approach is Fuzzy ARTMAP [2], which is based on Adaptive Resonance Theory (ART) networks.

3 Framework for Activity Recognition in the Home Environment

In this section, we present the framework of daily activity recognition using low-level sensor data in the home environment. Its main purpose is to map the data from the environment to a sequence of activities. We adapt the framework proposed in [3] to emphasize the incremental learning properties of the framework. Figure 1 illustrates the steps in an activity recognition module using low-level sensor data. The information about the surrounding environment comes as a stream of sensor events. The stream can then be annotated and split into labelled samples. The labelled samples are then provided to the incremental learning model for training. In online recognition, the stream is segmented into separated sequences of sensor events. Each separated sequence is then classified by the learned model. By applying incremental learning, the model is trained continuously over the life span of the system whenever there is new labelled data.

Figure 1: Framework for activity recognition

3.1 Data Acquisition

In the data acquisition phase, sensors are installed around the home. The sensors are low level, which differ from cameras or microphones. They continuously monitor the environment for some specific information. There are many types of information in the home environment that can be monitored by state-change sensors, such as doors, object picking or temperature. Which types of sensors are included depends entirely on the system's design.
The data from each low-level sensor is collected and combined into a stream of data at a central server. Each sensor's signal is considered an event in the stream. An example of an event stream [5] is given in Table 1. Each event has a time stamp, the name of the sensor and its value. There are several platforms for collecting and processing the data stream in a smart home system, such as the Motes platform [1] or the OSGI platform [10].




Table 1: An example of sensor events in a data stream.

Date        Time      Sensor  Value
2009-10-16  08:50:00  M026    ON
2009-10-16  08:50:01  M026    OFF
2009-10-16  08:50:02  M028    ON
2009-10-16  08:50:13  M026    OFF
2009-10-16  08:50:17  M026    OFF

3.2 Data annotation

For the purpose of training the model, the stream data needs to be segmented and labelled into separated activities. In a smart home system, data can be annotated directly by using devices such as a Personal Digital Assistant [20]. An alternative way is labelling the stream data indirectly through a post-processing step. The users can label their activities with a visualization tool [3]. Because the data is captured from various sensors and spans a long time, it is difficult to determine the right starting and ending point of an activity. After this step, each sensor event in the stream data is augmented with an optional action label.

3.3 Feature extraction

Because the daily activities in the home environment usually happen around a specific area, the motion sensors can produce good features for distinguishing the activity classes. In [3], the motion sensors activated during the period of an activity are included in the feature vector. The feature vector also uses high-level contextual information such as day of week, time of day, previous activity, next activity and energy consumption. This approach achieves a high accuracy with learning models such as multilayer perceptrons or decision trees.
In our approach, we use a similar method to extract features for training with the online learning model. A feature vector of an activity sample includes:
• Length of the activity in seconds.
• Part of the day when the activity happens. It is divided into six separated parts: morning, noon, afternoon, evening, night, latenight.
• Day of week.
• Previously performed activity.
• Number of motion sensors activated during the activity's period, and the number of "ON" values.
• A list of motion sensors which are activated.
• Energy consumption.
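The feature list above can be sketched as a small extraction function. The record fields and the hour boundaries of the six day parts are assumptions for illustration; the thesis does not specify them.

```python
# Illustrative sketch of the feature vector described above. The record
# fields and the day-part hour boundaries are assumptions, not values
# taken from the thesis.
def day_part(hour):
    if hour < 5:
        return "latenight"
    if hour < 11:
        return "morning"
    if hour < 13:
        return "noon"
    if hour < 18:
        return "afternoon"
    if hour < 22:
        return "evening"
    return "night"

def extract_features(activity):
    events = activity["sensor_events"]          # list of (sensor, value)
    fired = {s for s, _ in events}              # distinct sensors seen
    return {
        "length_sec": activity["end"] - activity["start"],
        "day_part": day_part(activity["start_hour"]),
        "day_of_week": activity["day_of_week"],
        "prev_activity": activity["prev_activity"],
        "n_sensors": len(fired),
        "n_on": sum(1 for _, v in events if v == "ON"),
        "sensor_list": sorted(fired),
        "energy": activity["energy"],
    }
```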


3.4 Segmentation

In a real application of activity recognition, the stream of sensor events is required to be segmented into separated subsequences before the recognition models can classify them. In [18], they mine the data stream to discover trigger patterns and use these patterns to determine the start of an activity. However, the trigger patterns are not always clear enough to discover, because daily activities often change and overlap with each other. An alternative approach is to use sliding time windows [19]. The data stream is split into fixed-time windows, which are then classified into one of the activity classes. Although this method can generate a sequence of activity labels from the data stream effectively, it may have difficulty when a time window overlaps with more than one activity.
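A minimal sketch of the fixed-time-window splitting, assuming events carry a timestamp in seconds; the window width is an illustrative parameter, not a value from [19].

```python
# Sketch of fixed-time-window segmentation: cut a time-sorted event
# stream into consecutive windows of `width` seconds. Timestamps in
# seconds and the window width are assumptions for illustration.
def segment(events, width):
    """events: list of (t_seconds, sensor, value), sorted by time."""
    if not events:
        return []
    windows, current, t0 = [], [], events[0][0]
    for ev in events:
        while ev[0] >= t0 + width:    # close every finished window
            windows.append(current)
            current, t0 = [], t0 + width
        current.append(ev)
    windows.append(current)           # close the last window
    return windows
```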

4 Growing Neural Gas Network

In this chapter, we present an incremental learning model which is improved from the Growing Neural Gas (GNG) network [8]. The GNG network is an unsupervised learning model and has been applied to vector quantization, clustering, and interpolation. Its structure is similar to the SOM [12] network and the Neural Gas [15] network. The network produces a local representation of the input space which is learned incrementally. In the first section, we introduce the structure and learning rule of the network in unsupervised learning. Then, we extend its structure to supervised problems with some modifications for improving the performance.




4.1 GNG Network Structure

A GNG network includes nodes and connections. Each node u in the network is represented by a reference vector wu in the input space V (V is a subset of R^n) and a local counter variable eu. The vector wu is considered the position of node u in the vector space, and eu represents the density in the region around u. The vector space is split into small regions, each taking a node of the network as its center. The connections between nodes represent the structure of the network: if two nodes are connected, their corresponding regions are adjacent in V. The network extends the Competitive Hebbian Learning principle [22] to learn its structure. The GNG network uses its structure to adapt the nodes' reference vectors and to determine where to add new nodes.
When the network receives a new training sample x, the model finds the nearest node s1 to x and assigns x to the region of s1. Then, the positions of s1 and all its topological neighbors are moved toward x. The moving distances are computed as follows. The weight of s1 is adjusted by an amount ∆s1:

∆s1 = εb (x − ws1)    (1)

For each neighbour v of s1, the weight vector wv is moved by ∆v:

∆v = εn (x − wv)    (2)

The network uses two constants, εb and εn, to control the adaptation of a node to a new sample.
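Equations (1) and (2) can be sketched as one adaptation step, assuming nodes are stored as a list of weight vectors and the topology as a neighbour map; the values of εb and εn are illustrative, not taken from the thesis.

```python
# Sketch of one GNG adaptation step (equations (1) and (2)).
# `weights` is a list of weight vectors, `neighbours` maps a node
# index to the set of its topological neighbours. eps_b and eps_n
# are illustrative values.
def adapt(weights, neighbours, x, eps_b=0.2, eps_n=0.006):
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    s1 = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    # eq. (1): move the nearest node toward x
    weights[s1] = [w + eps_b * (xi - w) for w, xi in zip(weights[s1], x)]
    # eq. (2): move its topological neighbours a little as well
    for v in neighbours[s1]:
        weights[v] = [w + eps_n * (xi - w) for w, xi in zip(weights[v], x)]
    return s1
```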
The network automatically increases its number of nodes after receiving a fixed number λ of training samples. A new node is placed into the highest-density region, i.e. the region with maximum local error in the network. More specifically, the adding algorithm finds the node q whose local error eq is maximum. Then it chooses the node f with maximum error ef among the neighbors of q. A new node u is set at the middle between q and f, and the local errors of the two nodes q and f are decreased due to the introduction of the new node.
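The insertion step can be sketched as follows; the factor by which the local errors of q and f are decreased (`alpha`) is an assumption, since this summary does not state its value.

```python
# Sketch of the node insertion step: find node q with maximum local
# error, its worst neighbour f, and insert a new node halfway between
# them, rewiring the edge q-f through the new node. The error-decay
# factor alpha is an assumed illustrative value.
def insert_node(weights, errors, neighbours, alpha=0.5):
    q = max(range(len(weights)), key=lambda i: errors[i])
    f = max(neighbours[q], key=lambda i: errors[i])
    new = [(a + b) / 2 for a, b in zip(weights[q], weights[f])]
    u = len(weights)
    weights.append(new)
    errors[q] *= alpha                 # decrease local errors of q and f
    errors[f] *= alpha
    errors.append(errors[q])           # initialise error of the new node
    neighbours[q].discard(f)           # replace edge q-f by q-u and u-f
    neighbours[f].discard(q)
    neighbours[q].add(u)
    neighbours[f].add(u)
    neighbours[u] = {q, f}
    return u
```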

4.2 Applying the growing neural gas model to activity recognition

The GNG network can be extended for supervised learning by creating a separate network for each class. In unsupervised problems, the GNG constructs a network that represents the underlying structure of the input space V. Using this property, it is possible to estimate the error of assigning an input vector to a specific input space. The error measure is defined as the nearest distance from the input vector to all the nodes in the network. Based on this error, the input vector is classified into the class which minimizes the error.

Assume there are L classes; then we create L different GNG networks, namely G1, G2, ..., GL. Each network Gi is trained with all the input vectors with label i. The distance from an input vector x to a network Gi is defined as the distance from the nearest node in the network to the input vector:

d(x, Gi) = min_{u∈Ai} ||x − wu||    (3)

where Ai is the set of nodes in Gi.
We use the defined distance as the error when x is assigned to Gi. To classify a new input sample x, we iterate over all the trained networks and match the input to the network which has the smallest error to x.
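The classification rule of equation (3) can be sketched as follows, with each network reduced to its list of node positions; the dictionary representation is an assumption for illustration.

```python
import math

# Sketch of the one-network-per-class scheme: classify x by the network
# whose nearest node is closest (equation (3)). Each network is
# represented simply as a list of node weight vectors.
def dist_to_network(x, nodes):
    return min(math.dist(x, w) for w in nodes)

def classify(x, networks):
    """networks: dict mapping class label -> list of node weights."""
    return min(networks, key=lambda c: dist_to_network(x, networks[c]))
```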

4.3 Separated Weighted Euclid

We use an adaptive version of the weighted Euclid distance for a more suitable distance measure. A GNG network G is associated with a standard deviation vector σ = {σ1, σ2, ..., σN}. Each σi is the standard deviation for dimension i of the input space V. The σ vector is updated online during the learning process.
Using the above σi, the distance between an input vector x and a weight vector wu in G is calculated as:
d(x, wu) = √( Σ_{i=1..N} (xi − wui)² / σi² )    (4)

4.4 Reduce Overlapping Regions by Using a Local Error Threshold

We extend the approach in [9] to reduce the overlapping regions between networks. Each node u is associated with an adding threshold tu on the local error. If a node's local error counter e exceeds its threshold, the network will be extended. At the beginning, tu is assigned the same value for all nodes. Then, it is updated during the learning process. A high value of tu means that no more nodes should be inserted in the region of u. We update tu as follows:
1. Initialize tu = T, the starting adding threshold.
2. Let (x, y) be the input vector and label of a training sample.
3. Find the nearest network Gi and the node u of Gi with the smallest distance to x:

   i = argmin_{j∈[1..L]} d(x, Gj)
   u = argmin_{v∈Ai} d(x, v)

4. If there is a false prediction (i ≠ y), update the adding criterion tu of u:

   tu = (1 + β) · tu
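The update steps above can be sketched as follows; the network representation (parallel lists of node weights and per-node thresholds) and the value of β are assumptions for illustration.

```python
# Sketch of the misclassification-driven threshold update (steps 1-4
# above). Each network is a dict with parallel lists "nodes" and
# "thresholds"; beta is an illustrative value.
def nearest(x, nodes):
    d, u = min((sum((a - b) ** 2 for a, b in zip(x, w)), i)
               for i, w in enumerate(nodes))
    return d, u

def update_threshold(x, y, networks, beta=0.1):
    # step 3: nearest network i and its nearest node u
    i = min(networks, key=lambda c: nearest(x, networks[c]["nodes"])[0])
    _, u = nearest(x, networks[i]["nodes"])
    if i != y:                          # step 4: false prediction
        networks[i]["thresholds"][u] *= (1 + beta)
    return i, u
```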

5 Radial Basic Function network

The Radial Basic Function (RBF) network [16] is a supervised learning method. An RBF network typically contains two layers: the hidden layer and the output layer. Normally, the hidden layer represents a set of local neurons, similar to the cells of growing neural gas. Each neuron has a weight vector that indicates its position in the input space. A neuron is activated if the input vector is close enough to it, through a Gaussian activation function. The neurons in the hidden layer are connected to the neurons in the output layer to create a structure similar to a perceptron layer. The main purpose of the perceptron layer is to produce a linear combination of the activations of the neurons in the hidden layer.
Training an RBF network can combine an unsupervised learning step and a supervised one. The positions of the neurons in the hidden layer can be found by using a clustering algorithm such as k-means. After that, the weights connecting the hidden layer to the output layer can be found by training a perceptron network with a gradient descent method. The two steps can also be done separately.
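A minimal sketch of the standard RBF computation described above: Gaussian hidden units over fixed centres feeding a linear output layer. The centre positions, the shared width σ, and the output weights are illustrative; in practice the centres would come from a clustering step such as k-means.

```python
import math

# Sketch of a standard RBF forward pass. Centres, the shared width
# sigma, and the output weights are illustrative assumptions.
def rbf_forward(x, centres, sigma, out_weights):
    # hidden layer: Gaussian activation of each local unit
    h = [math.exp(-sum((a - c) ** 2 for a, c in zip(x, ctr)) /
                  (2 * sigma ** 2)) for ctr in centres]
    # output layer: linear combination of hidden activations
    return [sum(w * hi for w, hi in zip(ws, h)) for ws in out_weights]
```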
One of the weaknesses of the traditional RBF network model is the fixed number of neurons in the hidden layer. It is hard to determine how many neurons are appropriate for the input space of the problem, especially in life-long learning tasks. Furthermore, the approach of having fixed neurons in the hidden layer can face difficulty in covering the whole space if the dimension of the input space becomes large. Using a similar approach to the growing gas model, an incremental version of the Radial Basic Function network can be trained without knowing the number of neurons in the hidden layer.
The main difficulty in the incremental approach is to compute the weights of the edges connecting the hidden layer to the output layer. In the traditional approach, a layer of perceptrons is built and the gradient descent method is used to train until the error converges. Unfortunately, we cannot apply this to the incremental approach, because we do not know which set of inputs will be fed to the network. Besides that, when a node is added to the hidden layer, it must be connected to the output layer, which can affect the already trained weights. The stopping criterion for gradient descent training is not easy to define here, because we have neither an additional validation set nor a way to refer back to the errors of previous input samples. However, [7] proposed to use only one step of gradient descent instead of running gradient descent until the network converges.
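The one-step strategy of [7] can be sketched as a single gradient update of the output weights after each sample; the learning rate and the network layout are assumptions for illustration.

```python
import math

# Sketch of the single-step update: after each (x, target) sample the
# output weights take one gradient step instead of training to
# convergence. Learning rate and layout are illustrative assumptions.
def one_step_update(x, target, centres, sigma, out_weights, lr=0.1):
    # Gaussian hidden-layer activations
    h = [math.exp(-sum((a - c) ** 2 for a, c in zip(x, ctr)) /
                  (2 * sigma ** 2)) for ctr in centres]
    for k, ws in enumerate(out_weights):      # one output unit per class
        y = sum(w * hi for w, hi in zip(ws, h))
        err = target[k] - y
        # one gradient-descent step on the squared error
        out_weights[k] = [w + lr * err * hi for w, hi in zip(ws, h)]
    return out_weights
```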

6 Experiment and Result

In this experiment, we apply the above incremental learning algorithms to daily activity recognition. The activities are segmented and annotated. We use a dataset from the WSU CASAS smart home project [5] in our experiment. This dataset consists of sensor data collected from the house of a volunteer adult over 3 months.
There are 33 sensors, which include motion sensors, door sensors and temperature sensors. A motion sensor detects motion around its location and takes an "ON" or "OFF" value. A door sensor recognizes door opening and closing and takes an "OPEN" or "CLOSE" value. A temperature sensor senses a change in temperature beyond a specific threshold and takes a real value. The entire data set is annotated with the beginning and ending of activities. The number of annotated activities is 2310 and the number of different activity types is 15.
We run our experiment with the four described models: Fuzzy ARTMAP, incremental RBF (RBF), the traditional Growing Neural Gas model (GNG) and the improved Growing Neural Gas model (new GNG).


Table 2: Accuracy of the four models.

Model          train1    train2    train3
Fuzzy ARTMAP   68.74 %   72.29 %   71.26 %
RBF            58.79 %   61.30 %   57.40 %
GNG            70.56 %   71.17 %   72.47 %
new GNG        72.55 %   74.81 %   77.58 %

The data set is randomly divided into two equal parts: train and test. The first part is used to train the models and the second part is used for evaluation. Each part has around 1100 samples. To simulate an incremental learning process in a real system, the train part is further divided into three equal parts: train1, train2, train3. These parts are used to train the models incrementally. More specifically, there are three training phases. In the first training phase, the part train1 is used; each sample in train1 is fed only once, in a random order, to the model. Then, in the second phase, we use samples in train2 to train the model obtained after the first phase. Next, we use samples in train3 to train the model from the second phase. After each phase, the model is tested on the test data. The results are presented in Table 2.
It can be seen clearly from the table that Fuzzy ARTMAP, GNG and new GNG improve their accuracy when they are provided with more training data. Among the three models, new GNG has the highest improvement and reaches the best accuracy of 77.58% in the third training phase. Only the RBF decreases substantially when train3 is provided. In all three training phases, the Fuzzy ARTMAP and GNG models are approximately equal, while the RBF has the lowest figures. This can be explained by the difference between the RBF model and the other ones. The RBF combines a local representation with a perceptron layer. However, in incremental learning, training the perceptron layer until it converges is a difficult task. Using one step of gradient descent clearly decreases the model's accuracy compared to the other three models. The hypercube representation of the Fuzzy ARTMAP model and the sphere representation of the GNG model result in similar accuracy. However, adding more constraints for controlling the overlapped areas in new GNG increases the accuracy considerably (5% with the train3 data).




7 Conclusion

In this thesis, we presented an incremental approach for daily activity recognition in a smart home system. The main focus of the problem is to classify the data coming from low-level sensors into different types of activities. The proposed approach is based on an unsupervised learning method, namely the GNG network.
The experiment is carried out on a real-life data set. While the incremental version of the RBF network suffers from weight changing, the multiple-GNG-networks approach has quite good performance in comparison to the Fuzzy ARTMAP model. By changing the distance metric and preventing the constant insertion of new nodes in overlapping regions, the improved version can separate different activity classes well.
In the future, we are planning to employ more efficient methods for feature extraction from the sequence of sensor events. The method described in our thesis depends largely on the spatial properties of the activity patterns. It does not use the temporal information which is present in the sequence of sensor events. Therefore, it can have difficulties in classifying activities which usually happen in the same area. In addition, because the feature vector combines different types of categorical data, finding a good distance metric in the input space is difficult.

References
[1] G. Anastasi, M. Conti, A. Falchi, E. Gregori, and A. Passarella. Performance measurements of motes sensor networks. In MSWiM '04: Proceedings of the 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pages 174–181. ACM Press, 2004.
[2] G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen. Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 3(5):698–713, 1992.
[3] Chao Chen, Barnan Das, and Diane Cook. A data mining framework
for activity recognition in smart environments. Intelligent Environments
IE 2010 Sixth International Conference on, pages 80–83, 2010.


[4] Diane J Cook, Juan C Augusto, and Vikramaditya R Jakkula. Ambient
intelligence: Technologies, applications, and opportunities. Pervasive
and Mobile Computing, 5(4):277–298, 2009.
[5] Diane J Cook and M Schmitter-Edgecombe. Assessing the quality of
activities in a smart environment. Methods of Information in Medicine,
48(5):480–485, 2009.
[6] Yoav Freund and Robert Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Computational Learning Theory, 904(1):23–37, 1995.
[7] Bernd Fritzke. Growing cell structures - a self-organizing network for
unsupervised and supervised learning. Neural Networks, 7:1441–1460,
1993.
[8] Bernd Fritzke. A growing neural gas network learns topologies. In
Advances in Neural Information Processing Systems 7, pages 625–632.
MIT Press, 1995.
[9] Fred H. Hamker. Life long learning cell structures continuously learning
without catastrophic interference. Neural Netw., 14(4-5):551–573, May
2001.
[10] Sumi Helal, William Mann, Hicham El-Zabadani, Jeffrey King, Youssef
Kaddoura, and Erwin Jansen. The gator tech smart house: A programmable pervasive space. Computer, 38(3):50–60, March 2005.

[11] Prachi Joshi and Parag Kulkarni. Incremental learning: Areas and methods a survey. International Journal of Data Mining and Knowledge
Management Process, 2(5), June 2012.
[12] T. Kohonen. Self-organization and associative memory: 3rd edition.
Springer-Verlag New York, Inc., New York, NY, USA, 1989.
[13] J. B. Macqueen. Some Methods for classification and analysis of multivariate observations. In Procedings of the Fifth Berkeley Symposium on
Math, Statistics, and Probability, volume 1, pages 281–297. University
of California Press, 1967.



[14] S. Madden and M. J. Franklin. Fjording the stream: An architecture
for queries over streaming sensor data. In Proceedings of the 18th International Conference on Data Engineering, ICDE ’02, pages 555–, Washington, DC, USA, 2002. IEEE Computer Society.
[15] Thomas Martinetz and Klaus Schulten. A neural-gas network learns
topologies. Artificial Neural Networks, 1:397–402, 1991.
[16] John Moody and Christian J. Darken. Fast learning in networks of
locally-tuned processing units. Neural Comput., 1(2):281–294, June
1989.
[17] R Polikar, J Byorick, S Krause, A Marino, and M Moreton. Learn++:
a classifier independent incremental learning algorithm for supervised
neural networks, 2002.
[18] P Rashidi and D J Cook. Keeping the resident in the loop: Adapting
the smart home to the user, 2009.
[19] Parisa Rashidi, Diane J Cook, Lawrence B Holder, and Maureen
Schmitter-Edgecombe. Discovering activities to recognize and track in
a smart environment. IEEE Transactions on Knowledge and Data Engineering, 23(4):527–539, 2011.
[20] E M Tapia, S S Intille, and K Larson. Activity recognition in the home
using simple and ubiquitous sensors. Pervasive Computing, 3001:158–
175, 2004.
[21] Tim Van Kasteren, Athanasios Noulas, Gwenn Englebienne, and Ben Kröse. Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing (UbiComp '08), 344:1, 2008.
[22] Ray H White. Competitive Hebbian Learning. 1991.
[23] Christopher R Wren and Emmanuel Munguia Tapia. Toward scalable
activity recognition for sensor networks. Networks, 3987:168–185, 2006.



Publications
1. Multi-Agent Architecture For Smart Home, Viet Cuong Ta, Thi Hong
Nhan Vu, The Duy Bui, The 2012 International Conference on Convergence
Technology January 26-28, Ho Chi Minh, Vietnam.
2. A Breadth-First Search Based Algorithm for Mining Frequent Movements From Spatiotemporal Databases, Thi Hong Nhan Vu, The Duy Bui, Quang Hiep Vu, Viet Cuong Ta, The 2012 International Conference on Convergence Technology, January 26-28, Ho Chi Minh, Vietnam.
3. Online learning model for daily activity recognition, Viet Cuong Ta, The
Duy Bui, Thi Hong Nhan Vu, Thi Nhat Thanh Nguyen, Proceedings of The
Third International Workshop on Empathic Computing (IWEC 2012).



