
Stimulating Critical Thinking in a Virtual Learning
Community with Instructor Moderations and Peer Reviews
Ke Zhang*
College of Education
Wayne State University, USA
E-mail:

Sacip Toker
College of Education
Wayne State University, USA
E-mail:
*Corresponding author
Abstract: This mixed methods study investigated the dynamic impacts of
instructor moderations and peer reviews on critical thinking (CT) in a virtual
learning community. Multiple data sets were collected from online discourses,
participants' written reflections, and learning artifacts, and were analyzed and
triangulated with both quantitative and qualitative methods. Both instructor
moderations and peer reviews had substantial impacts on learners' CT in multiple
ways, and stimulated CT development throughout the semester. As learners
developed stronger CT skills, the need for instructor moderations decreased, yet
peer reviews peaked in quantity, length, and depth of discussion. Peer
reviews in this study also demonstrated effective questioning patterns, which
were positively received by the students being questioned or criticized, and
resulted in changes and improvements in the final learning artifacts. Practical
implications for online teaching, learning, and community building are
discussed, together with suggestions for future research.
Keywords: Critical Thinking; Peer Review; Instructor Moderation; Virtual Learning Community
Biographical notes: Ke Zhang is an Associate Professor in the highly regarded
Instructional Technology Program at Wayne State University. Previously, she
was an assistant professor at Texas Tech University from 2003 to 2006. She received
her Ph.D. in Instructional Systems from the Pennsylvania State University with
a minor in Business Administration. Dr. Zhang has professionally consulted in
instructional technology, organizational development, training and
development with clients such as Siemens, Procter & Gamble, Pepsi, Otis,
public schools, medical schools, and governments. She has dozens of refereed
publications on online learning, emerging learning technologies, problem
solving, and mobile learning. Her popular book “Empowering Online Learning:
100+ Activities for Reading, Reflecting, Displaying, and Doing” was published
by Jossey-Bass and is to be translated into Chinese soon.
Sacip Toker is a Ph.D. candidate in Instructional Technology in the College of
Education at Wayne State University, Detroit, Michigan. In his native Turkey,
he was involved in several on-line training programs and provided instructional
technology support to the faculties at Middle East Technical University. He
also taught information technology and instructional planning courses at
Suleyman Demirel University. He has co-authored book chapters and
scholarly articles. He is also a member of ISPI and has presented at the
Association for Educational Communications and Technology, American
Educational Research Association, and International Society for Performance
Improvement conferences.


1. Introduction
It is important yet challenging to stimulate critical thinking (CT) in virtual learning
communities. Instructor moderations are crucial in online learning to promote critical
thinking, especially with ill-structured tasks (Salmon, 2011; Zhang & Ge, 2006). Peer
interactions provide emotional and motivational supports (De Simone, Lou, & Schmid,
2001; Wu, Farrell, & Singley, 2002), which are essential in virtual community building
efforts. Through cognitive elaborations and distributed cognition, peer discussions also
promote critical thinking in online communities (Abrami, Chambers, Poulsen, De Simone,
d’Apollonia, & Howden, 1995; Lund, 2004; Renzi & Klobas, 2000; Salomon, 1993;
Soller, Linton, Goodman, & Lesgold, 1999). Peer reviews may help develop
metacognitive learning skills (Livermore, 2005; Lou, Dedic & Rosenfield, 2003; Soller et
al., 1999; Wu et al., 2002).
When building virtual learning communities, instructor moderations and peer
interactions are particularly important. Perkins and Murphy's (2006) study on CT in online
asynchronous discussions identified four major indicators of CT: clarifications,
assessments, inferences, and strategies. They suggested that instructors should
develop strategies for online discussions and weight CT steps according to desired
learning outcomes. In a content analysis of online discussions in an applied educational
psychology course, Hara and colleagues found that online discussions were more one-way
than two-way interactions because the discussions were mostly for the sake of a grade
(Hara, Bonk, & Angeli, 2000). Even though discussions later in the semester became
more continuous and engaging, overall the percentages of cognitive and metacognitive
skills reflected in online discourses were very low (Hara, Bonk, & Angeli, 2000). Meyer's
(2003) study on CT further extended Garrison et al.'s (2001) CT model by adding a social
category, which confirmed the importance of peer interactions in promoting higher order
thinking in online learning. Schrire (2006) examined knowledge building
in asynchronous discussion groups, integrating Garrison et al.'s (2001) Practical Inquiry
Model, Bloom's taxonomy of educational objectives for the cognitive domain, and the
SOLO taxonomy (Biggs & Collis, 1982).
Koops, Van der Vleuten, De Leng, Oei, and Snoeckx (2011) investigated the
impacts of formative peer feedback on students' paper preparation in an online medical
course, and found that peer review helped improve learners' medical knowledge and
scientific reasoning. Şendağ and Odabaşı (2009) investigated how undergraduate
students' CT skills and content knowledge acquisition were influenced in an online
problem-based learning (PBL) environment, using a pre-test and post-test control group
experimental design. The results indicated that learning in online PBL did not significantly
affect content knowledge acquisition; nevertheless, students' CT skills progressed in
PBL. Osborne and colleagues (Osborne, Kriese, Tobey, & Johnson, 2009) assessed the
progress of students' CT skills through online discussions in an undergraduate course.
Students who showed higher-level CT skills in this study (Osborne et al., 2009) also
demonstrated understanding and appreciative behaviors, which helped build a friendly
and supportive community by addressing the affective and social needs of community
members. Tlhapane and Simelane's study (2010) found that students in technology-enhanced
PBL outperformed those in regular PBL in CT, problem solving, and social
skills in a nursing education program. Teo and Chai (2009) found that online peer
critiquing among pre-service teachers facilitated professional development
from novice to expert, and promoted multiple perspectives in problem solving.
However, it was yet to be understood how instructor moderations and peer reviews
may dynamically guide and scaffold the process through which online learners develop
CT skills in virtual learning communities. This study investigated the dynamic impacts of
instructor moderations and peer reviews on CT.

2. Research Method
This mixed methods research (Creswell, 2003, 2005; Creswell & Clark, 2007)
investigated the dynamic impacts of instructor moderations and peer reviews on CT in an
online graduate course in the United States. Multiple data sets were collected through
online observation, online discourse, participants' written reflections, and students'
learning artifacts, and were analyzed and triangulated with both quantitative and qualitative
methods (Creswell, 2003; Creswell & Clark, 2007). Qualitative and quantitative analyses
were conducted with multiple raters and other techniques to ensure trustworthiness
(Creswell, 2003; Denzin & Lincoln, 2005; Lincoln & Guba, 1985; Yin, 2009).

2.1. Context, Participants, and Procedure
Fifteen graduate students completed the study in a 4-credit online graduate course.
Participants had varied professional backgrounds in K-12, higher education, military,
healthcare education, and corporate settings, in roles such as teachers, trainers, instructors,
administrators, media specialists, or instructional designers. The course was built to foster
a collaborative, constructive, adult learning community (Conrad, 2002; Riel & Polin,
2004), with five weekly units and one two-week unit. Major learning tasks (e.g., online
discussions, peer reviews, and critical friends activities (Bonk & Zhang, 2008)) aimed to
promote CT skill development through knowledge mastery as well as problem solving (PS)
processes. The individual research paper assignment was an ill-structured PS task
spanning the semester. Open-ended, topic-specific discussions in each unit aimed to
provoke CT in content-specific knowledge mastery processes. Instructor moderations
continued throughout the semester, and peer reviews were required on the research paper
drafts. The instructor and fellow students provided feedback to members of the community in
unit learning activities, and on research papers at different stages (i.e., initial ideas,
proposal, outline, and draft). Students were encouraged to make revisions after each
round of instructor or peer review. All participants completed a final reflection paper at
the end of the semester.

2.2. Data Collection and Analyses
Various types of data were collected, including students’ online discourses, courserelated email communications with the instructor, research paper proposals, outlines,
drafts, final submission with revisions, peer and instructor’s feedback as text or MP3
audio files, and instructor’s constant reflective notes during the semester. Instructor’s
MP3-format feedback was transcribed to plain text. All textual data were read, discussed,



Knowledge Management & E-Learning: An International Journal, Vol.3, No.4.

537

categorized, coded, recoded by both researchers and analyzed in collaborative efforts of
both researchers. All online discourse data were read and re-read by both researchers and
then coded and recoded with open coding technique. The researchers then jointly
discussed the codes, combined or deleted some of them and then recoded the documents
thoroughly with an agreed list of codes, definition and examples.

Figure 1. Data analyses process flowchart

Table 1. Synthesized Taxonomies for Further CT Coding and Analysis

Bloom's Taxonomy
6. Evaluation
5. Synthesis
4. Analysis
3. Application
2. Comprehension
1. Knowledge

Critical Thinking Skills
6. Self-Regulation: (a) self-examination; (b) self-correction
5. Explanation: (a) stating results; (b) justifying procedures; (c) presenting arguments
4. Inference: (a) querying evidence; (b) conjecturing alternatives; (c) drawing conclusions
3. Evaluation: (a) assessing claims; (b) assessing arguments
2. Analysis: (a) examining ideas; (b) identifying arguments; (c) analyzing arguments
1. Interpretation: (a) categorization; (b) decoding significance; (c) clarifying meaning

The SOLO Taxonomy
5. Extended abstract level: (a) students make connections beyond the immediate subject area; (b) students generalize and transfer the principles from the specific to the abstract
4. Relational level: (a) students demonstrate the relationship between connections; (b) students demonstrate the relationship between connections and the whole
3. Multistructural level: (a) students make a number of connections; (b) the significance of the relationship between connections is not demonstrated
2. Unistructural level: (a) students make simple and obvious connections; (b) the significance of the connections is not demonstrated
1. Pre-structural level: (a) students are acquiring pieces of unconnected information; (b) no organization; (c) no overall sense

Coding of the online discourses included two aspects in this study. One was to
categorize the content or main message in the textual data being analyzed, and the other
was to identify the sequence and levels of the discourse in the communications. Thus,
discourses were coded with two sets of codes: letter codes for content analysis and
numerical for sequence and depth identification. Letter codes identified the categories of
moderations and CT indicators, together with an identifier of types of message (“P” for
original messages and “R” for responses). Numerical codes reflected the sequence and
depth level of the discourse. For example, a message coded as "2.a.3. ELB-R"
summarized the following information: the message was a response ("R") at level 2, the
third (3) message at that level, and it was categorized as an elaboration (ELB). Using
discourses in the form of questions and answers as an example, all codes, whether for
original posts (P) or responses (R), were organized as follows (a minimal parsing sketch
follows the list):
Administrative/Managerial (ADM-P, ADM-R)
Deeper understandings:
• Elaboration (ELB-P, ELB-R)
• Explanation/clarification (EXP-P, EXP-R)
• Extension/addition (EXT-P, EXT-R)
Challenging a Position:
• Search (SER-P, SER-R)
• Supporting materials/evidence/data (DAT-P, DAT-R)
• Why (WHY-P, WHY-R)
• What (WHAT-P, WHAT-R)
• When (WHEN-P, WHEN-R)
• How (HOW-P, HOW-R)
• Who (WHO-P, WHO-R)
• Which (WHC-P, WHC-R)
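
To make the two-part coding scheme concrete, the following minimal Python sketch parses the example code "2.a.3. ELB-R" described above. The parser and its field names are our illustration only, under the layout implied by that example; they are not an instrument used in the study.

import re
from dataclasses import dataclass

# Hypothetical sketch: split a two-part discourse code such as "2.a.3. ELB-R"
# into its numerical part (sequence/depth) and its letter part (category, type).
@dataclass
class DiscourseCode:
    level: int      # depth level of the message within the thread
    sublevel: str   # letter index within the level (e.g., "a")
    sequence: int   # order at this level (e.g., the 3rd message)
    category: str   # content category, e.g., ELB = elaboration
    msg_type: str   # "P" = original post, "R" = response

CODE_PATTERN = re.compile(
    r"(?P<level>\d+)\.(?P<sublevel>[a-z])\.(?P<sequence>\d+)\.\s*"
    r"(?P<category>[A-Z]+)-(?P<msg_type>[PR])"
)

def parse_code(raw: str) -> DiscourseCode:
    m = CODE_PATTERN.fullmatch(raw.strip())
    if m is None:
        raise ValueError(f"Not a valid discourse code: {raw!r}")
    return DiscourseCode(int(m["level"]), m["sublevel"], int(m["sequence"]),
                         m["category"], m["msg_type"])

# The example from the text: a level-2 response, the third at that level,
# categorized as an elaboration.
print(parse_code("2.a.3. ELB-R"))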

Later, based on Garrison et al.'s (2001) Practical Inquiry Model, Bloom's
taxonomy of cognitive learning objectives, and the SOLO taxonomy (Biggs & Collis,
1982), all discourses with CT indicators were further coded to help understand the
progress of CT skill development across time.


3. Findings
All textual discourses identified as related to CT in the initial coding process were
further analyzed. A depth of discussion (DD) value was generated for each individual
participant by topic/thread, integrating the total number of messages in the thread/topic
and the level of conversation. For example, if learner A posted 2 Level-1 messages, 4
Level-2 messages, and 3 Level-4 messages, his/her DD value was calculated as
(2 × 1) + (4 × 2) + (3 × 4) = 22. This formula reflected that each level of conversation
increased the depth of the discussion in the threaded messages.
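
As a concrete illustration of the DD formula (our sketch, not the authors' code), the following Python snippet reproduces learner A's calculation:

from collections import Counter

# DD = sum over conversation levels of (number of messages at that level x level).
def depth_of_discussion(message_levels):
    counts = Counter(message_levels)   # messages posted per conversation level
    return sum(level * n for level, n in counts.items())

# Learner A from the text: 2 Level-1, 4 Level-2, and 3 Level-4 messages.
levels = [1, 1, 2, 2, 2, 2, 4, 4, 4]
assert depth_of_discussion(levels) == (2 * 1) + (4 * 2) + (3 * 4) == 22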
A repeated measures ANOVA (Field, 2005) was performed for the following
three variables: (a) total number of messages (N), (b) length of messages in words
(L), and (c) depth of discussion values (DD). Mauchly's Test of Sphericity was
performed to check the sphericity assumption (Field, 2005), with the following
results: for N: χ² (5, 14) = 11.288, p > .05; for L: χ² (5, 14) = 11.288, p > .05; and for
DD: χ² (5, 14) = 11.288, p > .05. None produced significant results, and the
sphericity assumption was not violated for any of these variables. Thus, the
within-subjects (sphericity assumed) results were used for further interpretation. The
results, summarized in Table 2, showed significant changes among the units. For all three
variables, there was a statistically significant effect of weekly discussions. N and DD had
moderate effect sizes, while L had a large effect size. To reveal which factor may
have caused these results, each unit was compared to its consecutive unit. Table 3
summarizes the comparison results.
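
For readers who wish to reproduce this kind of analysis, the following Python sketch sets up an equivalent within-subjects design with statsmodels. The data are fabricated for illustration only; just the design (14 learners × 6 units, sphericity-assumed F test with df = (5, 65)) mirrors the analysis reported here. (Mauchly's test is not included in statsmodels; packages such as pingouin provide it.)

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Fabricated example data: 14 learners x 6 discussion units, one repeated
# measure per cell (here, the number of CT messages, N).
rng = np.random.default_rng(0)
units = ["Initial Ideas", "Unit 2", "Unit 3", "Draft Review", "Unit 4", "Unit 5"]
data = pd.DataFrame({
    "learner": np.repeat(np.arange(1, 15), len(units)),
    "unit": units * 14,
    "n_messages": rng.poisson(lam=4, size=14 * len(units)),
})

# Repeated measures ANOVA; the sphericity-assumed F test has
# df = (k - 1, (k - 1)(n - 1)) = (5, 65), matching Table 2's F(5, 65).
result = AnovaRM(data, depvar="n_messages", subject="learner", within=["unit"]).fit()
print(result)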
Table 2. Summary of Individuals' CT Messages

                        Total Number of         Length of Messages     Depth of Discussion
Topic/thread            Messages per Learner    (in words)             (DD) Value
                        M        SD             M          SD          M        SD
Initial Ideas           1.79     1.19           129.86     89.76       3.86     4.07
Unit 2                  4.86     2.11           794.93     382.71      16.64    10.43
Unit 3                  5.50     4.20           848.43     535.57      15.36    13.86
Draft Share & Review    4.79     1.93           512.21     277.25      12.43    10.48
Unit 4                  4.36     2.59           638.93     315.52      9.07     7.09
Unit 5                  4.07     2.27           660.21     318.71      9.00     6.26
F (5, 65)               5.285*                  15.681*                5.33*
Partial η²              .29                     .55                    .29
* p < .01

Table 3. CT Discourse Comparison Among Units

Topics/Threads        Number of Messages (N)       Length of Messages in Words (L)   Depth of Discussions (DD)
1 vs. 2               F(1, 13) = 27.284, p < .01   F(1, 13) = 50.429, p < .01        F(1, 13) = 21.711, p < .01
2 vs. 3               —                            —                                 —
3 vs. Drafts review   —                            F(1, 13) = 9.249, p < .01         —
Drafts review vs. 4   —                            —                                 —
4 vs. 5               —                            —                                 —
Note: Dashes indicate no significant difference between units.

The comparisons demonstrated a significant difference between
Unit 1 and Unit 2 in the total number of messages, in favor of Unit 2. Individuals' total
number of CT messages increased significantly from Unit 1 to Unit 2 (effect size
was .82), and then showed no significant changes from unit to unit for the rest of
the semester. Similar results were found for the depth of discussions, which had
an effect size of .79, again indicating a large effect. Likewise, depth of discussions
increased significantly in Unit 2 compared to Unit 1, and showed no significant
changes for the rest of the semester. At the same time, the length of messages increased
significantly in Unit 2 compared to Unit 1 (effect size = .89), and during peer review of
paper drafts compared to Unit 3 (effect size = .64).
In addition, a Friedman non-parametric test for k related samples was conducted to
analyze learners' responding messages. The test results demonstrated a significant
difference in the number of responses across topics, χ² (5, 17) = 17.755, p < .05.
Table 4. Mean Rank of Responding Messages by Topic

Topic/thread    Mean Rank
Unit 1          2.15
Unit 2          4.47
Unit 3          3.85
Unit 4          3.29
Peer Review     3.76
Unit 5          3.47

Wilcoxon Signed-Rank Tests indicated that in Units 2, 3, 4, and 5 and in Peer Review
(n = 66, 66, 46, 44, and 51, respectively), students sent more responding messages than in
Unit 1 (n = 16).
Table 5. Wilcoxon Signed-Rank Test Results of Responding Messages by Topic

Comparison             Z        p
Unit 2 – Unit 1        -3.122   .002
Unit 3 – Unit 1        -2.883   .004
Unit 4 – Unit 1        -2.408   .016
Unit 5 – Unit 1        -2.568   .010
Unit 5 – Unit 2        -2.394   .017
Peer Review – Unit 1   -2.994   .003
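
The non-parametric follow-up can be sketched the same way. The snippet below (again with fabricated data, not the study's) runs a Friedman test across the six topics and then a pairwise Wilcoxon signed-rank test, as in Tables 4 and 5:

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Fabricated counts of responding messages per learner for each topic.
rng = np.random.default_rng(1)
unit1, unit2, unit3, unit4, peer_review, unit5 = (
    rng.poisson(lam, size=14) for lam in (1, 5, 5, 3, 4, 3)
)

# Friedman test for k related samples across the six topics.
stat, p = friedmanchisquare(unit1, unit2, unit3, unit4, peer_review, unit5)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")

# Pairwise follow-up, e.g., Unit 2 vs. Unit 1 (cf. Table 5's Z = -3.122).
res = wilcoxon(unit2, unit1)
print(f"Wilcoxon statistic = {res.statistic:.3f}, p = {res.pvalue:.3f}")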

These findings indicate that learners may have learned in the earlier unit discussions,
through instructor moderations, and as a result demonstrated increased CT skills for the
rest of the semester. Such open-ended, topic-specific online discussions may have
equipped learners with the necessary CT skills and stimulated them to apply those skills
in the research paper tasks.
The following sections report more findings of qualitative and quantitative
analyses to build a deeper understanding of the dynamic impacts of instructor
moderations and peer reviews on CT.


3.1. How Did Instructor Moderations Stimulate CT in a Virtual Learning Community?
Instructor moderations were mostly motivational, cognitive, and meta-cognitive
throughout the semester. During the first two weeks, the needs for instructor moderation
were obvious and much greater than in the rest of the semester. The needs for
instructor moderations decreased later in the semester, and did not peak even in the
research paper draft review week, probably because peer reviews were sufficient and
supportive. At the earlier stage, well-structured instructions served as a powerful
cognitive tool, helping learners to organize their thoughts or to position their
different views. It was also noted, however, that as learning progressed later in the
semester, less-structured instructions actually stimulated more critical thinking, as
reflected in learners' online discourse and documented in their final reflections.
Evidently, learners' CT skills developed, and they became better able to apply CT in
learning tasks.
In many cases, the instructor's constructive feedback led to noticeable
improvements in research progress, which were well reflected in the changes and
revisions made in the research papers, and documented in the final reflections as well.
For example, when the initial research ideas were shared online, 3 of the 17 enrolled
students emailed the instructor in private, expressing concerns about this major PS task.
Many ideas were either vague or too general. The instructor provided timely feedback in
written text as well as recorded audio responses. For example, Diane originally proposed
the following:
"This research paper will be a conceptual paper that will present a
snapshot of what the literature says about adult learners and how
that knowledge can be applied to Distance Education (DE)
instructional design."
After two rounds of review and revisions working with the instructor, she later
redefined her research paper to focus on selected instructional design models to address
senior learners in online learning.
As online learners matured in the learning process, they became more critical and
reflective, and demonstrated more CT skills. They had significantly fewer questions for
the instructor and, at the same time, provided more critical and constructive feedback
for their peers in the second half of the semester, especially after the peer review activity.
It was evident that learners' CT improved, as reflected in the learning outcome artifacts:
from their initial ideas to the revised ideas and outline, then to the first complete draft,
and then to the final submission. Their comments in the final reflection papers confirmed
such learning benefits of instructor moderations. The qualitative findings are consistent
with the previously reported quantitative results; both indicated positive changes in
individual learners' CT.

3.2. How Did Peer Reviews Stimulate CT in the Virtual Learning Community?
Learners in this course were assigned critical friends from the class, who reviewed
their open-ended, topic-specific discussions and their research paper drafts. Learners had
different critical friends in different learning activities so that they could get to know one
another better throughout the semester. Peer reviews occurred during unit discussions and
were most intense during the research paper draft review and revision process. One
student consistently reviewed the work of those not assigned as critical friends in the
community. Five more students reviewed or commented on peers not assigned as their
critical friends in some, but not all, learning activities.
Peer reviews stimulated critical thinking in multiple ways. Peer reviews were
most effective when reviewers provided detailed suggestions, offered additional
resources, raised provoking questions, or challenged a position in the paper being
reviewed. Questions in peer reviews either sought clarification or provoked and
challenged a position; the latter were noticeably more frequent in the peer review
processes. When peer reviewers asked provoking questions or challenged a position,
their fellow students were more likely to respond critically, demonstrating CT skills.
For example, peer review discussions generated the most provoking questions (n = 51)
compared to other content-specific topics (Unit 1: n = 18; Unit 2: n = 33; Unit 4: n = 13;
Unit 5: n = 16).
Most questions for clarification were addressed in the online discussions and
resulted in minor revisions of the final paper. When peer reviewers provided additional
resources, their fellow students integrated those into their final revisions by adding new
references. For example, Wendy, a peer reviewer, asked: "How should these online
courses be designed for effective instruction?" In response, DeAnne added substantial
writing to the revised paper, with practical examples from the University of Texas at
Brownsville and Texas Southmost College nursing programs, as well as selected online
learning models to support them. Wendy also questioned whether or not distance
education is appropriate for nursing education. DeAnne later added the following in her
revisions to support her position:
According to the Journal of Nursing Education, in recent years
the number of courses and full-degree programs offered via the
World Wide Web has increased tremendously (p 364).
Universities and colleges are taking advantage of networking
technology to provide flexibility for students in nursing programs.
Students who have the Internet can access resources from
learning institutions to pursue degrees and certifications.
Distance education allows students to take courses at their own
time and allows the flexibility many nursing students desire when
taking courses. Distance education also improves access to
education and, in doing so, counters the shortage of nurses
(AACN 2000).
Distance education also helps to counter the nations mounting
nursing shortage by bringing nursing careers to people who
wouldn’t otherwise follow that path because they lack access to a
campus, or because work, family, or economic considerations
preclude a full-time, on-site education (AACN 2006).
Peer reviewers' initial messages were constructed in the following three patterns:
(a) Positive – Disagreement – Questions or Suggestions (PDQS); (b) Quotation or
Reference – Discussion (Q/RD); and (c) Positive – What I Learned – Suggestions or
Questions (PWilS/Q).
PDQS: In this pattern, peer reviewers started with positive aspects of what they
had reviewed, followed by a statement of disagreement, and then provoking
questions and suggestions for revisions.
Q/RD: In this pattern, students used direct citations or paraphrased texts from the
reviewed paper to start the conversation, followed by their own thoughts and comments,
as in the following example:
“The online environment lacks nonverbal cues such as facial
expressions, tone of voice, body language, posture and gestures.
Along with the lack of nonverbal cues, the absence of immediate
feedback makes it hard to tell if others have understood a
message”--- I hadn’t thought much about the fact of
miscommunication as it relates to culture and language. In my
opinion, this would be especially true for non-adult learners. Due
to this fact, I think synchronous DE is much more effective than

asynchronous because learners may have “telephone”
conversations. Previously, I associated a cultural online learning
environment with a group of learners who spoke the same
language, yet had differing cultural backgrounds. (ie different
cultures in the US) This probably due to the fact that most of my
research has been on DE at the K-12 level. Do you know of any
cross-cultural DE courses at that level? Did you find a certain
genre of CC DE? (i.e. university, corporate, etc.)”

PWilS/Q: One student came up with this structure to organize her reviews, which
was then modeled by others in the learning community:
• What I learned (from your paper)
• Questions (I have)
• Suggestions
• Additional related literature

In their suggestions, peer reviewers were not shy to suggest adding or deleting
topics or content, with strong justifications for such suggestions. Another student adopted
a similar template to organize his reviews:
• What surprised me
• Why questions
• General comments and observations

Responses to peer reviews, as reflected in the final research papers, fell into the
following categories: (1) changes made only in response to the instructor's comments
(n = 2), (2) minor changes (n = 6), and (3) major changes (n = 7). For example, Andrew
added the references suggested by his peer reviewer, Dena, and integrated them into the
paper. Cambrie asked a clarification question about a term, which led Kate to revise her
use of the term throughout the paper. Cases with major changes received a large number
of provoking questions, demanding further search and research efforts. These questions
stimulated in-depth discussions and critical reflections, and resulted in substantial
revisions or changes in the final versions of the papers.

4. Discussion
This study was conducted in a course particularly focused on promoting CT and on
learning community building. Evidently, both instructor moderations and peer reviews
stimulated and promoted critical thinking in the virtual learning community.
Instructor moderations resulted in noteworthy improvements in the learning outcomes
(e.g., the final research papers) and the learning processes (e.g., reflections, unit activities,
etc.). Moderations addressed affective, motivational, cognitive, and meta-cognitive needs
to stimulate CT and facilitate community-building efforts. Learners' CT skills developed
through the various activities, and as their CT skills grew, they were able to apply
higher order thinking skills in the learning process and to build the virtual learning
community. Future research may look into how open-ended, content-specific discussions,
which are relatively well-structured problems, may help prepare learners to handle
ill-structured problems, such as an independent research project. More in-depth research is
necessary to investigate how to strategically schedule different types of learning activities
as well as various interventions (e.g., instructor moderations and peer reviews) to
stimulate individual CT and to promote CT within virtual learning communities.
Peer reviews in this study showed considerable impacts on students' knowledge
development (Koops et al., 2011), as reflected in content-specific discussions as well as
in their PS outcome, the research papers. In addition, they also helped develop
metacognitive learning skills in the learning community (Livermore, 2005; Lou, Dedic &
Rosenfield, 2003; Soller et al., 1999; Wu et al., 2002). Questioning is a powerful strategy
for stimulating CT. Throughout the semester, online discourses regarding content-specific
topics or research papers were dominated by questions and responses, which effectively
stimulated CT and further promoted the collaborative dynamics in the virtual learning
community. Similarly, Chan, Hew, and Cheung (2009) found that questioning techniques
resulted in extended online discussion threads. In this study, students utilized different
patterns to provide constructive peer reviews, including disagreements,
challenging positions, suggesting additional topics, and sharing relevant resources. When
challenging a position, the patterns that drew the best responses all started with a positive
statement or a direct quotation from the reviewed draft.
Effective and friendly questioning techniques, such as those demonstrated in this study,
are essential in building collaborative virtual learning communities, because they not only
address cognitive demands but also provide motivational and emotional support (De
Simone et al., 2001; Chan, Hew, & Cheung, 2009; Wu et al., 2002; Zhang & Ge, 2006;
Salmon, 2011). Both students and instructors would greatly benefit from effective uses of
questioning techniques, especially in online learning. Thus, more research is needed to
further investigate specific questioning strategies and techniques as related to learners'
CT and virtual learning communities.
This study was conducted in a virtual learning community whose members
utilized email, blogs, podcasts, threaded discussion boards, and instant messages for
communication, collaboration, and community building. With more emerging
technologies, especially social networking technologies, instructors and online
learners now have more media options, as well as opportunities for more dynamic,
versatile, and personalized learning (e.g., Bonk & Zhang, 2008; Zhang & Bonk, 2008).
Similar moderation and peer review strategies may apply to virtual learning communities
implementing more innovative technologies. However, more research is vital to guide
practice in stimulating CT and in building collaborative virtual learning communities in
the new era.

References
1. Abrami, P.C., Chambers, B., Poulsen, C., De Simone, C., d'Apollonia, S., & Howden, J. (1995). Classroom connections: Understanding and using cooperative learning. Toronto, Ontario, Canada: Harcourt-Brace.
2. Bonk, C.J., & Zhang, K. (2008). Empowering online learning: 100+ activities for reading, reflecting, displaying, and doing. San Francisco, CA: Jossey-Bass.
3. Biggs, J.B., & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy: Structure of the observed learning outcome. New York, NY: Academic Press.
4. Chan, J.C.C., Hew, K.F., & Cheung, W.S. (2009). Asynchronous online discussion thread development: Examining growth patterns and peer-facilitation techniques. Journal of Computer Assisted Learning, 25(5), 438-452.
5. Conrad, D. (2002). Deep in the hearts of learners: Insights into the nature of online community. Journal of Distance Education, 17(1), 1-19.
6. Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed method approaches. Thousand Oaks, CA: Sage Publications.
7. Creswell, J.W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. New Jersey: Prentice Hall.
8. Creswell, J.W., & Clark, V.L.P. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage Publications.
9. Denzin, N.K., & Lincoln, Y.S. (2005). Handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage Publications.
10. De Simone, C., Lou, Y., & Schmid, R.F. (2001). Meaningful and interactive distance learning supported by the use of metaphor and synthesizing activities. Journal of Distance Education, 16(1), 85-101.
11. Hara, N., Bonk, C.J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152.
12. Koops, W., Van der Vleuten, C., De Leng, B., Oei, S.G., & Snoeckx, L. (2011). Computer-supported collaborative learning in the medical workplace: Students' experiences on formative peer feedback of a critical appraisal of a topic paper. Medical Teacher, 33(6), e318-e323. doi:10.3109/0142159X.2011.575901
13. Lincoln, Y., & Guba, E.G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.
14. Livermore, C.R. (2005). The real challenge of computer-supported collaborative learning: How do we motivate all? In T.S. Roberts (Ed.), Computer supported collaborative learning in higher education (pp. 162-171). Hershey, PA: Idea Group Publishing.
15. Lou, Y., Dedic, H., & Rosenfield, S. (2003). Feedback model and successful e-learning. In S. Naidu (Ed.), Learning and teaching with technology: Principles and practice (pp. 249-260). Sterling, VA: Kogan Page.
16. Lund, K. (2004). Human support in CSCL: What, for whom and by whom? In J.-W. Strijbos, P.A. Kirschner, R.L. Martens, & P. Dillenbourg (Eds.), What we know about CSCL and implementing it in higher education (Vol. 3, pp. 167-198). Norwell, MA: Kluwer Academic Publishers.
17. Meyer, K.A. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. JALN, 7(3), 55-65.
18. Osborne, R.E., Kriese, P., Tobey, H., & Johnson, E. (2009). Putting it all together: Incorporating "SoTL practices" for teaching interpersonal and critical thinking skills in an online course. Insight (Parkville, MO), 4, 11.
19. Perkins, C., & Murphy, E. (2006). Identifying and measuring individual engagement in critical thinking in online discussions: An exploratory case study. Educational Technology & Society, 9(1), 298-307.
20. Renzi, S., & Klobas, J.E. (2000). Steps toward computer supported collaborative learning in large classrooms. Educational Technology & Society, 3(3), 317-328.
21. Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. Barab, R. Kling, & J. Gray (Eds.), Designing for virtual communities in the service of learning (pp. 16-50). Cambridge: Cambridge University Press.
22. Salomon, G. (1993). No distribution without individual cognition: A distributed interactionist view. In G. Salomon (Ed.), Distributed cognition: Psychological and educational considerations (pp. 111-138). New York: Cambridge University Press.
23. Salmon, G. (2011). E-moderating: The key to teaching and learning online (3rd ed.). London: Routledge Falmer.
24. Schrire, S. (2006). Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Computers & Education, 46(1), 49-70.
25. Şendağ, S., & Odabaşı, H.F. (2009). Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Computers & Education, 53(1), 132-141. doi:10.1016/j.compedu.2009.01.008
26. Soller, A., Linton, F., Goodman, B., & Lesgold, A. (1999). Toward intelligent analysis and support of collaborative learning interaction. In Vivet & Lajoie (Eds.), Proceedings of AIED'99 (pp. 75-82). Le Mans: IOS Press.
27. Teo, Y.H., & Chai, C.S. (2009). Scaffolding online collaborative critiquing for educational video production. Knowledge Management & E-Learning: An International Journal, 1(1), 51-66.
28. Tlhapane, S.M., & Simelane, S. (2010). Technology-enhanced problem-based learning methodology in geographically dispersed learners of Tshwane University of Technology. Knowledge Management & E-Learning: An International Journal, 2(1), 68-83.
29. Vlachopoulos, P., & Cowan, J. (2010). Choices of approaches in e-moderation: Conclusions from a grounded theory study. Active Learning in Higher Education, 11(3), 213-224. doi:10.1177/1469787410379684
30. Yin, R. (2009). Case study research: Design and methods (4th ed.). Beverly Hills, CA: Sage Publishing.
31. Wu, A., Farrell, R., & Singley, M. (2002). Scaffolding group learning in a collaborative network environment. In G. Stahl (Ed.), Computer support for collaborative learning: Foundations for a CSCL community. Proceedings of CSCL 2002 (pp. 245-254). Boulder, CO, USA.
32. Zhang, K., & Bonk, C.J. (2008). Addressing diverse learner preferences and intelligences with emerging technologies: Matching models to online opportunities. Canadian Journal of Learning and Technology, 34(2), 309-332.
33. Zhang, K., & Ge, X. (2006). Dynamic contexts of online collaborative learning. In A.D. de Figueiredo & A.P. Afonso (Eds.), Managing learning in virtual settings: The role of the context (pp. 97-115). Calgary, AB, Canada: Idea Group, Inc.
