
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 1024–1031,
Prague, Czech Republic, June 2007.
© 2007 Association for Computational Linguistics
Topic Analysis for Psychiatric Document Retrieval
Liang-Chih Yu*, Chung-Hsien Wu*, Chin-Yew Lin†, Eduard Hovy‡ and Chia-Ling Lin*

*Department of CSIE, National Cheng Kung University, Tainan, Taiwan, R.O.C.
†Microsoft Research Asia, Beijing, China
‡Information Sciences Institute, University of Southern California, Marina del Rey, CA, USA

{chwu,totalwhite}@csie.ncku.edu.tw
Abstract
Psychiatric document retrieval attempts to help people efficiently and effectively locate the consultation documents relevant to their depressive problems. Individuals can understand how to alleviate their symptoms according to recommendations in the relevant documents. This work proposes the use of high-level topic information extracted from consultation documents to improve the precision of retrieval results. The topic information adopted herein includes negative life events, depressive symptoms and semantic relations between symptoms, which are beneficial for a better understanding of users' queries. Experimental results show that the proposed approach achieves higher precision than the word-based retrieval models, namely the vector space model (VSM) and Okapi model, which adopt word-level information alone.
1 Introduction
Individuals may suffer from negative or stressful
life events, such as the death of a family member, an argument with a spouse or the loss of a job. Such
events play an important role in triggering depres-
sive symptoms, such as depressed moods, suicide
attempts and anxiety. Individuals under these cir-
cumstances can consult health professionals using
message boards and other services. Health profes-
sionals respond with suggestions as soon as possi-
ble. However, the response time is generally sev-
eral days, depending on both the processing time
required by health professionals and the number of
problems to be processed. Such a long response
time is unacceptable, especially for patients suffer-
ing from psychiatric emergencies such as suicide
attempts. A potential solution considers the problems that have been processed and the corresponding suggestions, called consultation documents, as psychiatry web resources. These resources gen-
erally contain thousands of consultation documents
(problem-response pairs), making them a useful
information resource for mental health care and
prevention. By referring to the relevant documents,
individuals can become aware that they are not
alone because many people have suffered from the
same or similar problems. Additionally, they can
understand how to alleviate their symptoms ac-
cording to recommendations. However, browsing
and searching all consultation documents to iden-
tify the relevant documents is time consuming and
tends to become overwhelming. Individuals need
to be able to retrieve the relevant consultation
documents efficiently and effectively. Therefore,
this work presents a novel mechanism to automati-
cally retrieve the relevant consultation documents
with respect to users' problems.
Traditional information retrieval systems repre-
sent queries and documents using a bag-of-words
approach. Retrieval models, such as the vector
space model (VSM) (Baeza-Yates and Ribeiro-
Neto, 1999) and Okapi model (Robertson et al.,
1995; Robertson et al., 1996; Okabe et al., 2005),
are then adopted to estimate the relevance between
queries and documents. The VSM represents each
query and document as a vector of words, and
adopts the cosine measure to estimate their rele-
vance. The Okapi model, which has been used on
the Text REtrieval Conference (TREC) collections,

developed a family of word-weighting functions
for relevance estimation. These functions consider
word frequencies and document lengths for word
weighting. Both the VSM and Okapi models esti-
mate the relevance by matching the words in a
query with the words in a document. Additionally,
query words can further be expanded by the con-
cept hierarchy within general-purpose ontologies
such as WordNet (Fellbaum, 1998), or automati-
cally constructed ontologies (Yeh et al., 2004).
However, such word-based approaches only
consider the word-level information in queries and
documents, ignoring the high-level topic informa-
tion that can help improve understanding of users'
queries. Consider the example consultation docu-
ment in Figure 1. A consultation document com-
prises two parts: the query part and recommenda-
tion part. The query part is a natural language text,
containing rich topic information related to users'
depressive problems. The topic information in-
cludes negative life events, depressive symptoms,
and semantic relations between symptoms. As in-
dicated in Figure 1, the subject suffered from a
love-related event, and several depressive symp-
toms, such as <Depressed>, <Suicide>, <Insom-
nia> and <Anxiety>. Moreover, there is a cause-
effect relation holding between <Depressed> and
<Suicide>, and a temporal relation holding be-
tween <Depressed> and <Insomnia>. Different

topics may lead to different suggestions decided by
experts. Therefore, an ideal retrieval system for
consultation documents should consider such topic
information so as to improve the retrieval precision.
Natural language processing (NLP) techniques
can be used to extract more precise information
from natural language texts (Wu et al., 2005a; Wu
et al., 2005b; Wu et al., 2006; Yu et al., 2007).
This work adopts the methodology presented in
(Wu et al., 2005a) to extract depressive symptoms
and their relations, and adopts the pattern-based
method presented in (Yu et al., 2007) to extract
negative life events from both queries and consul-
tation documents. This work also proposes a re-
trieval model to calculate the similarity between a
query and a document by combining the similari-
ties of the extracted topic information.
The rest of this work is organized as follows.
Section 2 briefly describes the extraction of topic
information. Section 3 presents the retrieval model.
Section 4 summarizes the experimental results.
Conclusions are finally drawn in Section 5.
2 Framework of Consultation Document
Retrieval
Figure 2 shows the framework of consultation
document retrieval. The retrieval process begins
with receiving a user’s query about his depressive
problems in natural language. The example query
is shown in Figure 1. The topic information is then
extracted from the query, as shown in the center of Figure 2.

Figure 1. Example of a consultation document. The query part reads: "I broke up with my boyfriend. I often felt like crying and felt pain every day. So, I tried to kill myself several times. After that, it took me a long time to fall asleep at night. In recent months, I often lose my temper for no reason.", annotated with the symptoms <Depressed>, <Suicide>, <Insomnia> and <Anxiety>. The recommendation part reads: "It's normal to feel this way when going through these kinds of struggles, but over time your emotions should level out. Suicide doesn't solve anything; think about how it would affect your family... There are a few things you can try to help you get to sleep at night, like drinking warm milk, listening to relaxing music...". The bold arrowed lines denote cause-effect relations; arrowed lines denote temporal relations; dashed lines denote temporal boundaries, and angle brackets denote depressive symptoms.

The extracted topic information is represented by the sets of negative life events, depres-
sive symptoms, and semantic relations. Each ele-
ment in the event set and symptom set denotes an
individual event and symptom, respectively, while
each element in the relation set denotes a symptom
chain to retain the order of symptoms. Similarly,

the query parts of consultation documents are rep-
resented in the same manner. The relevance esti-
mation then calculates the similarity between the
input query and the query part of each consultation
document by combining the similarities of the sets
of events, symptoms, and relations within them.
Finally, a list of consultation documents ranked in
the descending order of similarities is returned to
the user.
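To make this representation concrete, the following is a minimal illustrative sketch in Python (not the authors' code); the class and field names are invented for illustration, and the values correspond to the example query of Figure 1.

# A minimal sketch (not the authors' code) of the topic representation
# described above: a query, or the query part of a consultation document,
# is reduced to a set of negative life events, a set of depressive
# symptoms, and, for each relation type, a set of ordered symptom chains.
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Tuple

@dataclass
class TopicRepresentation:
    events: FrozenSet[str] = frozenset()      # e.g., {"<Love>"}
    symptoms: FrozenSet[str] = frozenset()    # e.g., {"<Depressed>", "<Suicide>"}
    # relation type -> list of symptom chains that retain symptom order
    relations: Dict[str, List[Tuple[str, ...]]] = field(default_factory=dict)

# The example query of Figure 1 would then be represented roughly as:
query_topics = TopicRepresentation(
    events=frozenset({"<Love>"}),
    symptoms=frozenset({"<Depressed>", "<Suicide>", "<Insomnia>", "<Anxiety>"}),
    relations={
        "cause-effect": [("<Depressed>", "<Suicide>")],
        "temporal": [("<Depressed>", "<Insomnia>")],
    },
)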
In the following, the extraction of topic informa-
tion is described briefly. The detailed process is
described in (Wu et al., 2005a) for symptom and
relation identification, and in (Yu et al., 2007) for
event identification.
1) Symptom identification: A total of 17 symp-
toms are defined based on the Hamilton De-
pression Rating Scale (HDRS) (Hamilton,
1960). The identification of symptoms is sen-
tence-based. For each sentence, its structure is
first analyzed by a probabilistic context free
grammar (PCFG), built from the Sinica Tree-
bank corpus developed by Academia Sinica,
Taiwan, to generate a set of dependencies between word tokens. Each dependency has the format (modifier, head, $rel_{modifier,head}$). For instance, the dependency (matters, worry about, goal) means that "matters" is the goal of the head of the sentence, "worry about". Each sentence can then
be associated with a symptom based on the
probabilities that dependencies occur in all
symptoms, which are obtained from a set of
training sentences.
2) Relation Identification: The semantic rela-
tions of interest include cause-effect and tem-
poral relations. After the symptoms are ob-
tained, the relations holding between symp-
toms (sentences) are identified by a set of dis-
course markers. For instance, the discourse
markers "because" and "therefore" may signal
cause-effect relations, and "before" and "after"
may signal temporal relations.
3) Negative life event identification: A total of 5 types of events, namely <Family>, <Love>, <School>, <Work> and <Social>, are defined based on Pagano et al.'s (2004) research. The
identification of events is a pattern-based ap-
proach. A pattern denotes a semantically plau-
sible combination of words, such as <parents,
divorce> and <exam, fail>. First, a set of pat-
terns is acquired from psychiatry web corpora
by using an evolutionary inference algorithm.
The event of each sentence is then identified
by using an SVM classifier with the acquired
patterns as features.
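As a rough illustration of the relation identification step (item 2 above), the following sketch links two adjacent sentences whenever the second one contains a discourse marker mentioned in the text. It is a simplification under stated assumptions: the real system uses a richer, learned marker set and can relate non-adjacent symptoms, and all function and variable names are invented.

# Minimal sketch of marker-based relation identification (assumptions:
# one symptom label has already been assigned to each sentence, and only
# adjacent sentences are linked, which simplifies the paper's method).
CAUSE_EFFECT_MARKERS = {"because", "therefore"}
TEMPORAL_MARKERS = {"before", "after"}

def identify_relations(sentences, sentence_symptoms):
    """sentences: list of token lists; sentence_symptoms: one symptom label per sentence."""
    relations = []
    for i in range(1, len(sentences)):
        tokens = {tok.lower().strip(",.") for tok in sentences[i]}
        if tokens & CAUSE_EFFECT_MARKERS:
            relations.append(("cause-effect",
                              (sentence_symptoms[i - 1], sentence_symptoms[i])))
        elif tokens & TEMPORAL_MARKERS:
            relations.append(("temporal",
                              (sentence_symptoms[i - 1], sentence_symptoms[i])))
    return relations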
3 Retrieval Model
Figure 2. Framework of consultation document retrieval (the query of Figure 1 undergoes topic analysis, i.e., symptom identification, negative life event identification and relation identification, to produce the topic information used for relevance estimation against the consultation documents and final ranking). The rectangle denotes a negative life event related to love relation (<Love>). Each circle denotes a symptom. D: Depressed, S: Suicide, I: Insomnia, A: Anxiety.

The similarity between a query and a document, $Sim(q,d)$, is calculated by combining the similarities of the sets of events, symptoms and relations within them, as shown in (1):

$$Sim(q,d) = \alpha \cdot Sim_{Evn}(q,d) + \beta \cdot Sim_{Sym}(q,d) + (1 - \alpha - \beta) \cdot Sim_{Rel}(q,d) \qquad (1)$$
where $Sim_{Evn}(q,d)$, $Sim_{Sym}(q,d)$ and $Sim_{Rel}(q,d)$ denote the similarities of the sets of events, symptoms and relations, respectively, between a query and a document, and $\alpha$ and $\beta$ denote the combination factors.
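As a small illustration, Eq. (1) amounts to a weighted linear combination of the three component similarities. The sketch below assumes they have already been computed by the methods of Sections 3.1 and 3.2, and uses as defaults the combination factors reported in Section 4.1; it is not the authors' implementation.

# Sketch of Eq. (1): weighted combination of the event, symptom and
# relation similarities. alpha=0.3 and beta=0.5 are the optimal values
# reported in Section 4.1.
def combined_similarity(sim_evn: float, sim_sym: float, sim_rel: float,
                        alpha: float = 0.3, beta: float = 0.5) -> float:
    return alpha * sim_evn + beta * sim_sym + (1.0 - alpha - beta) * sim_rel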
3.1 Similarity of events and symptoms
The similarities of the sets of events and symptoms are calculated in the same way. The similarity
of the event set (or symptom set) is calculated by
comparing the events (or symptoms) in a query
with those in a document. Additionally, only the
events (or symptoms) with the same type are
considered. The events (or symptoms) with
different types are considered as irrelevant, i.e., no
similarity. For instance, the event <Love> is
considered as irrelevant to <Work>. The similarity
of the event set is calculated by
$$Sim_{Evn}(q,d) = \frac{1}{N(Evn_q \cup Evn_d)} \sum_{e_q \in Evn_q,\; e_d \in Evn_d} Type(e_q, e_d) \cdot \big( \cos(e_q, e_d) + const. \big) \qquad (2)$$
where $Evn_q$ and $Evn_d$ denote the event set in a query and a document, respectively; $e_q$ and $e_d$ denote the events; $N(Evn_q \cup Evn_d)$ denotes the cardinality of the union of $Evn_q$ and $Evn_d$, used as a normalization factor, and $Type(e_q, e_d)$ denotes an identity function that checks whether two events have the same type, defined as

$$Type(e_q, e_d) = \begin{cases} 1 & \text{if } Type(e_q) = Type(e_d) \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
The $\cos(e_q, e_d)$ denotes the cosine angle between two vectors of words representing $e_q$ and $e_d$, as shown below:

$$\cos(e_q, e_d) = \frac{\sum_{i=1}^{T} w_i^{e_q} w_i^{e_d}}{\sqrt{\sum_{i=1}^{T} \big(w_i^{e_q}\big)^2} \sqrt{\sum_{i=1}^{T} \big(w_i^{e_d}\big)^2}} \qquad (4)$$
where $w$ denotes a word in a vector, and $T$ denotes the dimensionality of the vectors. Accordingly, when two events have the same type, their similarity is given as $\cos(e_q, e_d)$ plus a constant, $const.$ Additionally, $\cos(e_q, e_d)$ and $const.$ can be considered as the word-level and topic-level similarities, respectively. The optimal setting of $const.$ is determined empirically.
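A minimal sketch of Eqs. (2) to (4) follows, assuming each event (or symptom) is represented as a pair of its type and a bag-of-words weight vector. Treating the normalization term $N(Evn_q \cup Evn_d)$ as the number of distinct types in either set is an assumption made for illustration; all names are invented and the code is not the authors' implementation.

# Sketch of Eqs. (2)-(4). An event is a (type, word -> weight) pair.
import math
from typing import Dict, List, Tuple

Event = Tuple[str, Dict[str, float]]

def cosine(v1: Dict[str, float], v2: Dict[str, float]) -> float:
    # Eq. (4): cosine of two bag-of-words vectors.
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def event_set_similarity(q_events: List[Event], d_events: List[Event],
                         const: float = 0.6) -> float:
    # N(Evn_q U Evn_d): approximated here as the number of distinct event
    # types in either set (an assumption for this sketch).
    union_size = len({t for t, _ in q_events} | {t for t, _ in d_events})
    if union_size == 0:
        return 0.0
    total = 0.0
    for q_type, q_vec in q_events:
        for d_type, d_vec in d_events:
            if q_type == d_type:                        # Type(e_q, e_d) = 1, Eq. (3)
                total += cosine(q_vec, d_vec) + const   # summand of Eq. (2)
    return total / union_size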
3.2 Similarity of relations
When calculating the similarity of relations, only
the relations with the same type are considered.
That is, the cause-effect (or temporal) relations in a
query are only compared with the cause-effect (or
temporal) relations in a document. Therefore, the
similarity of relation sets can be calculated as
$$Sim_{Rel}(q,d) = \frac{1}{Z} \sum_{r_q, r_d} Type(r_q, r_d) \cdot Sim(r_q, r_d) \qquad (5)$$

$$Z = N_C(r_q) N_C(r_d) + N_T(r_q) N_T(r_d) \qquad (6)$$
where $r_q$ and $r_d$ denote the relations in a query and a document, respectively; $Z$ denotes the normalization factor for the number of relations; $Type(r_q, r_d)$ denotes an identity function similar to (3), and $N_C(\cdot)$ and $N_T(\cdot)$ denote the numbers of cause-effect and temporal relations.
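The following sketch illustrates Eqs. (5) and (6), assuming relations are (type, symptom chain) pairs and that a chain-level similarity function, such as the sequence kernel defined below, is supplied by the caller. The names are invented for illustration and the code is not the authors' implementation.

# Sketch of Eqs. (5)-(6): only relations of the same type are compared,
# and the sum of pairwise chain similarities is normalized by Z.
from typing import Callable, List, Tuple

Relation = Tuple[str, Tuple[str, ...]]  # (relation type, ordered symptom chain)

def relation_set_similarity(
        q_rels: List[Relation], d_rels: List[Relation],
        chain_similarity: Callable[[Tuple[str, ...], Tuple[str, ...]], float]) -> float:
    n_c_q = sum(1 for t, _ in q_rels if t == "cause-effect")
    n_c_d = sum(1 for t, _ in d_rels if t == "cause-effect")
    n_t_q = sum(1 for t, _ in q_rels if t == "temporal")
    n_t_d = sum(1 for t, _ in d_rels if t == "temporal")
    z = n_c_q * n_c_d + n_t_q * n_t_d                       # Eq. (6)
    if z == 0:
        return 0.0
    total = sum(chain_similarity(cq, cd)
                for tq, cq in q_rels for td, cd in d_rels if tq == td)
    return total / z                                        # Eq. (5)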
Both cause-effect and temporal relations are rep-
resented by symptom chains. Hence, the similarity
of relations is measured by the similarity of symp-
tom chains. The main characteristic of a symptom
chain is that it retains the cause-effect or temporal
order of the symptoms within it. Therefore, the
order of the symptoms must be considered when
calculating the similarity of two symptom chains.
Accordingly, a
sequence kernel function (Lodhi et

al., 2002; Cancedda et al., 2003) is adopted to cal-
culate the similarity of two symptom chains. A
sequence kernel compares two sequences of sym-
bols (e.g., characters, words) based on the subse-
quences within them, but not individual symbols.
Thereby, the order of the symptoms can be incor-
porated into the comparison process.
The sequence kernel calculates the similarity of
two symptom chains by comparing their sub-
symptom chains at different lengths. An increasing
number of common sub-symptom chains indicates
a greater similarity between two symptom chains.
For instance, the two symptom chains $s_1 s_2 s_3 s_4$ and $s_3 s_2 s_1$ contain the same symptoms $s_1$, $s_2$ and $s_3$, but in different orders. To calculate the similarity between these two symptom chains, the sequence kernel first calculates their similarities at lengths 2 and 3, and then averages the similarities at the two lengths. To calculate the similarity at length 2, the sequence kernel compares their sub-symptom chains of length 2, i.e., $\{s_1 s_2, s_1 s_3, s_1 s_4, s_2 s_3, s_2 s_4, s_3 s_4\}$ and $\{s_3 s_2, s_3 s_1, s_2 s_1\}$. Similarly, their similarity at length 3 is calculated by comparing their sub-symptom chains of length 3, i.e., $\{s_1 s_2 s_3, s_1 s_2 s_4, s_1 s_3 s_4, s_2 s_3 s_4\}$ and $\{s_3 s_2 s_1\}$.

Obviously, no similarity exists between $s_1 s_2 s_3 s_4$ and $s_3 s_2 s_1$, since no sub-symptom chains are
matched at both lengths. In this example, the sub-
symptom chains of length 1, i.e., individual symp-
toms, do not have to be compared because they
contain no information about the order of symp-
toms. Additionally, the sub-symptom chains of
length 4 do not have to be compared, because the
two symptom chains share no sub-symptom chains
at this length. Hence, for any two symptom chains,
the length of the sub-symptom chains to be com-
pared ranges from two to the minimum length of
the two symptom chains. The similarity of two
symptom chains can be formally denoted as
$$Sim(r_q, r_d) = Sim\big(sc_q^{N_1}, sc_d^{N_2}\big) = K\big(sc_q^{N_1}, sc_d^{N_2}\big) = \frac{1}{N-1} \sum_{n=2}^{N} K_n\big(sc_q^{N_1}, sc_d^{N_2}\big) \qquad (7)$$
where $sc_q^{N_1}$ and $sc_d^{N_2}$ denote the symptom chains corresponding to $r_q$ and $r_d$, respectively; $N_1$ and $N_2$ denote the lengths of $sc_q^{N_1}$ and $sc_d^{N_2}$, respectively; $K(\cdot,\cdot)$ denotes the sequence kernel for calculating the similarity between two symptom chains; $K_n(\cdot,\cdot)$ denotes the sequence kernel for calculating the similarity between two symptom chains at length $n$, and $N$ is the minimum length of the two symptom chains, i.e., $N = \min(N_1, N_2)$.
The sequence kernel $K_n\big(sc_i^{N_1}, sc_j^{N_2}\big)$ is defined as

$$K_n\big(sc_i^{N_1}, sc_j^{N_2}\big) = \frac{\big\langle \Phi_n\big(sc_i^{N_1}\big), \Phi_n\big(sc_j^{N_2}\big) \big\rangle}{\big\| \Phi_n\big(sc_i^{N_1}\big) \big\| \, \big\| \Phi_n\big(sc_j^{N_2}\big) \big\|} = \frac{\sum_{u \in SC_n} \phi_u\big(sc_i^{N_1}\big)\, \phi_u\big(sc_j^{N_2}\big)}{\sqrt{\sum_{u \in SC_n} \phi_u\big(sc_i^{N_1}\big)^2} \sqrt{\sum_{u \in SC_n} \phi_u\big(sc_j^{N_2}\big)^2}} \qquad (8)$$
where $K_n\big(sc_i^{N_1}, sc_j^{N_2}\big)$ is the normalized inner product of the vectors $\Phi_n\big(sc_i^{N_1}\big)$ and $\Phi_n\big(sc_j^{N_2}\big)$; $\Phi_n(\cdot)$ denotes a mapping that transforms a given symptom chain into a vector of the sub-symptom chains of length $n$; $\phi_u(\cdot)$ denotes an element of the vector, representing the weight of a sub-symptom chain $u$, and $SC_n$ denotes the set of all possible sub-symptom chains of length $n$. The weight of a sub-symptom chain, i.e., $\phi_u(\cdot)$, is defined as
$$\phi_u\big(sc_i^{N_1}\big) = \begin{cases} 1 & \text{if } u \text{ is a contiguous sub-symptom chain of } sc_i^{N_1} \\ \lambda^{\theta} & \text{if } u \text{ is a non-contiguous sub-symptom chain of } sc_i^{N_1} \text{ with } \theta \text{ skipped symptoms} \\ 0 & \text{if } u \text{ does not appear in } sc_i^{N_1} \end{cases} \qquad (9)$$
where $\lambda \in [0,1]$ denotes a decay factor that is adopted to penalize the non-contiguous sub-symptom chains occurring in a symptom chain, based on the number of skipped symptoms. For instance, $\phi_{s_1 s_2}(s_1 s_2 s_3) = \phi_{s_2 s_3}(s_1 s_2 s_3) = 1$ since $s_1 s_2$ and $s_2 s_3$ are considered contiguous in $s_1 s_2 s_3$, and $\phi_{s_1 s_3}(s_1 s_2 s_3) = \lambda^1$ since $s_1 s_3$ is a non-contiguous sub-symptom chain with one skipped symptom. The decay factor is adopted because a contiguous sub-symptom chain is preferable to a non-contiguous chain when comparing two symptom chains. The setting of the decay factor is domain dependent. If $\lambda = 1$, then no penalty is applied for skipping symptoms, and the cause-effect and temporal relations are transitive. The optimal setting of $\lambda$ is determined empirically. Figure 3 presents an example to summarize the computation of the similarity between two symptom chains.

Figure 3. Illustrative example of relevance computation using the sequence kernel function.
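To summarize the computation in runnable form, the following is an illustrative sketch of Eqs. (7) to (9), not the authors' implementation: phi_n enumerates the length-n sub-symptom chains with the decay weighting of Eq. (9), K_n is the normalized inner product of Eq. (8), and the chain similarity averages K_n over lengths 2 to N as in Eq. (7). Keeping the largest weight when a sub-chain occurs more than once is an assumption of this sketch.

# Sketch of the gap-weighted sequence kernel of Eqs. (7)-(9).
import math
from itertools import combinations
from typing import Dict, Tuple

def phi_n(chain: Tuple[str, ...], n: int, lambda_: float) -> Dict[Tuple[str, ...], float]:
    weights: Dict[Tuple[str, ...], float] = {}
    for idx in combinations(range(len(chain)), n):
        sub = tuple(chain[i] for i in idx)
        skipped = (idx[-1] - idx[0] + 1) - n          # symptoms skipped inside the span
        weight = lambda_ ** skipped                   # 1 when contiguous, Eq. (9)
        weights[sub] = max(weights.get(sub, 0.0), weight)
    return weights

def K_n(c1: Tuple[str, ...], c2: Tuple[str, ...], n: int, lambda_: float) -> float:
    # Eq. (8): normalized inner product of the two length-n feature vectors.
    v1, v2 = phi_n(c1, n, lambda_), phi_n(c2, n, lambda_)
    dot = sum(w * v2.get(u, 0.0) for u, w in v1.items())
    norm = (math.sqrt(sum(w * w for w in v1.values()))
            * math.sqrt(sum(w * w for w in v2.values())))
    return dot / norm if norm else 0.0

def chain_similarity(c1: Tuple[str, ...], c2: Tuple[str, ...], lambda_: float = 0.8) -> float:
    # Eq. (7): average K_n over n = 2 .. min(len(c1), len(c2)).
    N = min(len(c1), len(c2))
    if N < 2:
        return 0.0
    return sum(K_n(c1, c2, n, lambda_) for n in range(2, N + 1)) / (N - 1)

# The worked example above: s1s2s3s4 vs. s3s2s1 share no ordered sub-chains.
print(chain_similarity(("s1", "s2", "s3", "s4"), ("s3", "s2", "s1")))  # 0.0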
4 Experimental Results
4.1 Experiment setup
1) Corpus:
The consultation documents were
collected from the mental health website of the John Tung Foundation and from PsychPark, a virtual psychiatric clinic maintained by a group of volunteer professionals of the Taiwan Association of Mental Health Informatics (Bai et al., 2001). Both of the web sites provide
various kinds of free psychiatric services and
update the consultation documents periodically.
For privacy consideration, all personal infor-
mation has been removed. A total of 3,650
consultation documents were collected for
evaluating the retrieval model, of which 20
documents were randomly selected as the test
query set, 100 documents were randomly se-
lected as the tuning set to obtain the optimal
parameter settings of involved retrieval models,
and the remaining 3,530 documents were the
reference set to be retrieved. Table 1 shows the average number of events, symptoms and relations in the test query set.

Table 1. Characteristics of the test query set.
Topic                  Avg. Number
Negative Life Event    1.45
Depressive Symptom     4.40
Semantic Relation      3.35
2) Baselines: The proposed method, denoted as
Topic, was compared to two word-based re-
trieval models: the VSM and Okapi BM25
models. The VSM was implemented in terms
of the standard TF-IDF weight. The Okapi
BM25 model is defined as
$$\sum_{t \in Q} w^{(1)} \frac{(k_1 + 1)\, tf}{K + tf} \cdot \frac{(k_3 + 1)\, qtf}{k_3 + qtf} + k_2 \cdot |Q| \, \frac{avdl - dl}{avdl + dl} \qquad (10)$$
where $t$ denotes a word in a query $Q$; $qtf$ and $tf$ denote the word frequencies occurring in a query and a document, respectively, and $w^{(1)}$ denotes the Robertson-Sparck Jones weight of $t$ (without relevance feedback), defined as

$$w^{(1)} = \log \frac{N - n + 0.5}{n + 0.5} \qquad (11)$$
where $N$ denotes the total number of documents, and $n$ denotes the number of documents containing $t$. In (10), $K$ is defined as

$$K = k_1 \big( (1 - b) + b \cdot dl / avdl \big) \qquad (12)$$
where $dl$ and $avdl$ denote the length and average length of a document, respectively. The default values of $k_1$, $k_2$, $k_3$ and $b$ are described in (Robertson et al., 1996), where $k_1$ ranges from 1.0 to 2.0; $k_2$ is set to 0; $k_3$ is set to 8, and $b$ ranges from 0.6 to 0.75. Additionally, BM25 can be considered as BM15 and BM11 when $b$ is set to 1 and 0, respectively (a short sketch of this scoring function appears after this list).
3) Evaluation metric: To evaluate the retrieval
models, a multi-level relevance criterion was adopted. The relevance criterion was divided into four levels, as described below.
- Level 0: No topics are matched between a query and a document.
- Level 1: At least one topic is partially matched between a query and a document.
- Level 2: All of the three topics are partially matched between a query and a document.
- Level 3: All of the three topics are partially matched, and at least one topic is exactly matched between a query and a document.
To deal with the multi-level relevance, the
discounted cumulative gain (DCG) (Jarvelin and Kekalainen, 2000) was adopted as the evaluation metric, defined as

$$DCG[i] = \begin{cases} G[1] & \text{if } i = 1 \\ DCG[i-1] + G[i] / \log_c i & \text{otherwise} \end{cases} \qquad (13)$$
where
i denotes the i-th document in the re-
trieved list;
G[i] denotes the gain value, i.e.,
relevance levels, of the
i-th document, and c
denotes the parameter to penalize a retrieved
document in a lower rank. That is, the DCG
simultaneously considers the relevance levels,
and the ranks in the retrieved list to measure
the retrieval precision. For instance, let
<3,2,3,0,0> denotes the retrieved list of five
documents with their relevance levels. If no
penalization is used, then the DCG values for
Topic Avg. Number
Negative Life Event 1.45
Depressive Symptom 4.40
Semantic Relation 3.35
Table 1. Characteristics of the test query set.
1029
the retrieved list are <3,5,8,8,8>, and thus
DCG[5]=8. Conversely, if
c=2, then the docu-
ments retrieved at ranks lower than two are pe-
nalized. Hence, the DCG values for the re-
trieved list are <3,5,6.89,6.89,6.89>, and

DCG[5]=6.89.
The relevance judgment was performed by
three experienced physicians. First, the
pooling
method (Voorhees, 2000) was adopted to gen-
erate the candidate relevant documents for
each test query by taking the top 50 ranked
documents retrieved by each of the involved
retrieval models, namely the VSM, BM25 and
Topic. Two physicians then judged each can-
didate document based on the multilevel rele-
vance criterion. Finally, the documents with
disagreements between the two physicians
were judged by the third physician. Table 2 shows the average number of relevant documents for the test query set.

Table 2. Average number of relevant documents for the test query set.
Relevance Level    Avg. Number
Level 1            18.50
Level 2             9.15
Level 3             2.20
4) Optimal parameter setting: The parameter settings of BM25 and Topic were evaluated using the tuning set. The optimal settings of BM25 were $k_1 = 1$ and $b = 0.6$. The other two parameters were set to the default values, i.e., $k_2 = 0$ and $k_3 = 8$. For the Topic model, the parameters required to be evaluated include the combination factors $\alpha$ and $\beta$ described in (1), the constant $const.$ described in (2), and the decay factor $\lambda$ described in (9). The optimal settings were $\alpha = 0.3$, $\beta = 0.5$, $const. = 0.6$ and $\lambda = 0.8$.
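For reference, the following sketch scores one document for one query with the BM25 formula of Eqs. (10) to (12), using the parameter values given in item 4. Approximating |Q| by the number of distinct query terms is an assumption, and the code is illustrative rather than the authors' implementation.

# Sketch of BM25 scoring, Eqs. (10)-(12), for a single query/document pair.
# term_freqs/query_freqs map words to counts; df maps words to document
# frequencies; N is the collection size; dl/avdl are document lengths.
import math
from typing import Dict

def bm25_score(query_freqs: Dict[str, int], term_freqs: Dict[str, int],
               df: Dict[str, int], N: int, dl: float, avdl: float,
               k1: float = 1.0, k2: float = 0.0, k3: float = 8.0,
               b: float = 0.6) -> float:
    K = k1 * ((1 - b) + b * dl / avdl)                       # Eq. (12)
    score = 0.0
    for t, qtf in query_freqs.items():
        tf = term_freqs.get(t, 0)
        n = df.get(t, 0)
        w1 = math.log((N - n + 0.5) / (n + 0.5))             # Eq. (11)
        score += w1 * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    # Query-length correction term of Eq. (10); vanishes when k2 = 0.
    score += k2 * len(query_freqs) * (avdl - dl) / (avdl + dl)
    return score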
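The DCG of Eq. (13) and the worked example in item 3 can be reproduced with the short sketch below; applying the logarithmic penalty only to ranks greater than c follows that worked example.

# Sketch of Eq. (13): DCG with gain values G and penalty parameter c.
import math
from typing import List

def dcg(gains: List[int], c: int = 2) -> List[float]:
    values: List[float] = []
    for i, g in enumerate(gains, start=1):
        if i == 1:
            values.append(float(g))
        elif i <= c:                       # no penalty at ranks up to c
            values.append(values[-1] + g)
        else:
            values.append(values[-1] + g / math.log(i, c))
    return values

print([round(v, 2) for v in dcg([3, 2, 3, 0, 0], c=2)])  # [3.0, 5.0, 6.89, 6.89, 6.89]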
4.2 Retrieval results
The results are divided into two groups: retrieval precision and retrieval efficiency. The retrieval precision was measured by DCG values. Additionally, a paired, two-tailed t-test was used to determine whether the performance difference was statistically significant. The retrieval efficiency was measured by the query processing time, i.e., the time for processing all the queries in the test query set.
Table 3. DCG values of different retrieval models. * Topic vs. BM25 significantly different (p<0.05).
         DCG(5)   DCG(10)  DCG(20)  DCG(50)  DCG(100)
Topic    4.7516*  6.9298   7.6040*  8.3606*  9.3974*
BM25     4.4624   6.7023   7.1156   7.8129   8.6597
BM11     3.8877   4.9328   5.9589   6.9703   7.7057
VSM      2.3454   3.3195   4.4609   5.8179   6.6945
BM15     2.1362   2.6120   3.4487   4.5452   5.7020

Table 3 shows the comparative results of retrieval precision. The two variants of BM25,
namely BM11 and BM15, are also considered in
comparison. For the word-based retrieval models,
both BM25 and BM11 outperformed the VSM, and
BM15 performed worst. The Topic model
achieved higher DCG values than both the BM-
series models and VSM. The reasons are three-fold.
First, a negative life event and a symptom can each
be expressed by different words with the same or
similar meaning. Therefore, the word-based mod-
els often failed to retrieve the relevant documents
when different words were used in the input query.
Second, a word may relate to different events and
symptoms. For instance, the term "worry about" is
a good indicator for both the symptoms <Anxiety>
and <Hypochondriasis>. This may result in ambi-
guity for the word-based models. Third, the word-
based models cannot capture semantic relations
between symptoms. The Topic model incorporates
not only the word-level information, but also more
useful topic information about depressive problems,
thus improving the retrieval results.
The query processing time was measured using
a personal computer with Windows XP operating
system, a 2.4GHz Pentium IV processor and 512MB RAM. Table 4 shows the results.

Table 4. Average query processing time of different retrieval models.
Retrieval Model    Avg. Time (seconds)
Topic              17.13
VSM                 0.68
BM25                0.48

The Topic
model required more processing time than both
VSM and BM25, since identification of topics in-
volves more detailed analysis, such as semantic
parsing of sentences and symptom chain construc-
tion. This finding indicates that although the topic
information can improve the retrieval precision,
incorporating such high-precision features reduces
the retrieval efficiency.
5 Conclusion
This work has presented the use of topic informa-
tion for retrieving psychiatric consultation docu-
ments. The topic information can provide more
precise information about users' depressive prob-
lems, thus improving the retrieval precision. The
proposed framework can also be applied to differ-
ent domains as long as the domain-specific topic
information is identified. Future work will focus on
more detailed experiments, including the contribu-
tion of each topic to retrieval precision, the effect
of using different methods to combine topic infor-
mation, and evaluation with real users.
References
Baeza-Yates, R. and B. Ribeiro-Neto. 1999. Modern
Information Retrieval. Addison-Wesley, Reading,
MA.
Cancedda, N., E. Gaussier, C. Goutte, and J. M. Renders.
2003. Word-Sequence Kernels. Journal of Machine
Learning Research, 3(6):1059-1082.
Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
Hamilton, M. 1960. A Rating Scale for Depression. Journal of Neurology, Neurosurgery and Psychiatry, 23:56-62.
Jarvelin, K. and J. Kekalainen. 2000. IR Evaluation
Methods for Retrieving Highly Relevant Documents.
In Proc. of the 23rd Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 41-48.
Lodhi, H., C. Saunders, J. Shawe-Taylor, N. Cristianini,
and C. Watkins. 2002. Text Classification Using
String Kernels. Journal of Machine Learning Re-
search, 2(3):419-444.
Okabe, M., K. Umemura and S. Yamada. 2005. Query
Expansion with the Minimum User Feedback by
Transductive Learning. In Proc. of HLT/EMNLP,
Vancouver, Canada, pages 963-970.
Pagano, M.E., A.E. Skodol, R.L. Stout, M.T. Shea, S.
Yen, C.M. Grilo, C.A. Sanislow, D.S. Bender, T.H.
McGlashan, M.C. Zanarini, and J.G. Gunderson.
2004. Stressful Life Events as Predictors of Function-
ing: Findings from the Collaborative Longitudinal
Personality Disorders Study. Acta Psychiatrica Scan-
dinavica, 110: 421-429.
Robertson, S. E., S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In Proc. of the Third Text REtrieval Conference (TREC-3), NIST.
Robertson, S. E., S. Walker, M. M. Beaulieu, and M. Gatford. 1996. Okapi at TREC-4. In Proc. of the Fourth Text REtrieval Conference (TREC-4), NIST.
Voorhees, E. M. and D. K. Harman. 2000. Overview of
the Sixth Text REtrieval Conference (TREC-6). In-
formation Processing and Management, 36(1):3-35.
Wu, C. H., L. C. Yu, and F. L. Jang. 2005a. Using Semantic Dependencies to Mine Depressive Symptoms from Consultation Records. IEEE Intelligent Systems, 20(6):50-58.
Wu, C. H., J. F. Yeh, and M. J. Chen. 2005b. Domain-
Specific FAQ Retrieval Using Independent Aspects.
ACM Trans. Asian Language Information Processing,
4(1):1-17.
Wu, C. H., J. F. Yeh, and Y. S. Lai. 2006. Semantic
Segment Extraction and Matching for Internet FAQ
Retrieval. IEEE Trans. Knowledge and Data Engi-
neering, 18(7):930-940.
Yeh, J. F., C. H. Wu, M. J. Chen, and L. C. Yu. 2004.
Automated Alignment and Extraction of Bilingual
Domain Ontology for Cross-Language Domain-
Specific Applications. In Proc. of the 20th COLING,
Geneva, Switzerland, pages 1140-1146.
Yu, L. C., C. H. Wu, J. F. Yeh, and F. L. Jang. 2007. HAL-based Evolutionary Inference for Pattern Induction from Psychiatry Web Resources. Accepted by IEEE Trans. Evolutionary Computation.