DESIGNING AN END-OF-YEAR ENGLISH OBJECTIVE TEST FOR 1ST YEAR NON-MAJOR ENGLISH STUDENTS OF THE ACADEMY OF FINANCE


INTRODUCTION

I. RATIONALE OF THE STUDY
Nowadays, mastering a foreign language, especially English, plays an important role in our social life. It is obvious that mastering English helps the country not only have more contacts with other nations in the world but also enrich its people's knowledge.
There are more and more people in Vietnam who want to learn English for different purposes. Therefore, English has been introduced as a compulsory subject in almost every school and college in Vietnam. Like any other subject, it needs some kind of evaluation to measure students' performance and ability, in which accuracy, objectivity and logicality are required. Nevertheless, almost all teachers lack experience and are often not formally trained in test design.
The Academy of Finance is a university with a large number of non-major English students. In other words, the students lack background knowledge of English. They have only 150 periods of English for Business Basics (EBB) during their first two terms to prepare for their 120 periods of English for Specific Purposes (ESP) in the fourth and fifth terms. In fact, many students admit that English is quite demanding for them and that they cannot learn it well, even General English. Despite working very hard, many students failed or got bad results after each final examination. One of the most important causes of this situation is the matter of testing. Quite often, the present tests show a gap between the things students learn in the course and the tasks they have to do in the tests, i.e. the tests contain items that are unfamiliar or too difficult for students.
For many years, most English examinations for school or university students have been in written form. They include questions or different kinds of exercises, and the students' task is to write their own answers. This form of testing has the advantage that the results of the test reflect the students' real ability. However, copying others' answers during the test is unavoidable. Moreover, it is really hard for a written test to cover all the knowledge that students have learnt.
Hughes (1989:1) has said, “It cannot be denied that a great deal of language
testing is of very poor quality. Too often language tests have a harmful effect on teaching
and learning; and too often they fail to measure accurately whatever it is they are intended
to measure.”
For the above reasons, there is, to my mind, an urgent need to propose a different kind of test for non-major English students. Therefore, my research on designing an end-of-year English objective test for 1st year non-major English students of the Academy of Finance aims to satisfy that need.
II. AIMS OF THE STUDY
It is widely accepted that testing is an important tool in educational research and program evaluation. That is why this minor thesis aims to design an end-of-year English objective test for 1st year non-major English students of the Academy of Finance. The test was administered as a final examination, and its results were then analyzed, evaluated, and interpreted.
The specific aims of the research are:
-

To present the background of language testing and objective testing.

-

And then, to point out some qualities of a good test.

-

More importantly, to suggest appropriate test items in testing English of Business

Basics.

-

Also, to assess the learners’ achievement in acquiring English for Business Basics
after 150 periods.

-

Last but not least, to see whether or not the test satisfies the qualities of a good test.
From then on the test will measure the effectiveness of the teacher’s teaching. If the
test is not a good one and her teaching is not appropriate, some suggestions will be
made for a better test form.
It is expected that this investigation into objective testing might be helpful to

teachers of English in developing more relevant strategies for testing Business English
both now and in the future.


III. METHODS OF THE STUDY
In order to perform this study, I first choose the method of analyzing, summarizing, and synthesizing materials and books to form the theoretical background. More importantly, I design an end-of-year English objective test for 1st year non-major English students, administer it, and then evaluate it, so the method adopted is quantitative.
Besides, I also make use of information from informal discussions with my colleagues and
students.
In the process of writing this paper, I also base myself on the knowledge I have
learnt from my teachers, my supervisor and from some references.
Last but not least, my own experience gained at the Academy of Finance, and my experience in learning Language Testing during my study at the College of Foreign Languages, VNU, also provide favorable conditions for the completion of the study.
IV. SCOPE OF THE STUDY
This paper is intended, as the title suggests: “Designing an end-of-year English
objective test for 1st year non-major English students of the Academy of Finance”, to
touch upon some following issues:
- The background of language testing as well as objective testing.
- The design and evaluation of an objective achievement test for 1st year non-major English students of the Academy of Finance.
- Practical recommendations for an appropriate final test to meet the objectives of the course and the needs of teachers and students.
V. ORGANIZATION OF THE STUDY
The study is divided into three parts:
The first part is the introduction dealing with the rationale, aims, methods, scope,
organization, and significance of the study.
The second part is the main part of the paper with three chapters:
Chapter I is the review of literature. This chapter gives a general overview of
language testing, achievement test, qualities of a good test, and objective testing.


Chapter II refers to the research methodology, including the methods adopted in doing the research, the selection of participants, the materials, and the methods of data collection and data analysis.
Chapter III is the discussion, which is the main part of the study. This chapter
reviews how an end-of-year English objective test for 1st year non-major English students
of the Academy of Finance was designed, administered, and evaluated.
The last part is the conclusion that summarizes the content of the paper and gives some
suggestions for further study.
VI. SIGNIFICANCE
It is hoped that this study will, to some extent, be a useful reference for teachers of English at universities to get more ideas and techniques for designing valid and reliable tests.



DEVELOPMENT
CHAPTER I: LITERATURE REVIEW

This chapter will provide an overview of the theoretical background of the research. It is composed of four sections. Section I.1 brings a significant insight into the concept of language testing. Achievement tests are discussed in section I.2, which is followed by section I.3 with an investigation into the major characteristics of a good test. The final area to be mentioned is a brief review of objective testing, which is presented in section I.4.
I.1. LANGUAGE TESTING
In fact, many researchers and practitioners in EFL and ESL testing have paid much
attention to the issue of language testing. Thus, different authors in different periods have
different definitions of testing. Moreover, although most researchers argue about what to test, why to test, and how to test, not many of the definitions can cover all aspects of testing. That is why the concept of language testing is a rather complex one.
According to Allen and Corder (1974:313), “a test is a measuring device which we use when we want to compare an individual with other individuals who belong to the same group”. Following this opinion, a test is limited only to a tool which helps sort out one student from others, regardless of the real abilities being tested, the interaction between the
test and the testees, etc. Meanwhile, Heaton (1998:5) holds a profound definition that tests
are designed firstly as “devices to reinforce learning & to motivate the students” or “as a
means of assessing the students’ performance”.
Besides, a test can be seen as an instrument for measuring a sample of behavior
(Gronlund, 1985:5). Similarly, Bachman (1995:18), when distinguishing measurement and evaluation from tests, agrees that we design a test “to elicit a specific sample of an individual’s behavior”. With this conception, tests focus on measuring a given aspect of
language ability on the basis of a sample of language use. However, these definitions seem to be wider in the sense that language tests provide the means for a more careful focus on the specific language abilities of interest.
Generally, the definition of testing is always given in connection with different
kinds of test. Following is a classification of test given by Hughes (1989:9):
- Proficiency tests
- Achievement tests
  > Class progress tests
  > Final achievement tests
- Diagnostic tests
- Placement tests
- Aptitude or prognostic tests
- Direct tests versus indirect tests
- Discrete-point tests versus integrative tests
- Norm-referenced tests versus criterion-referenced tests
- Objective tests versus subjective tests
- Communicative tests
As for some authors, language tests play many important roles in life. McNamara
(2000:4) views that language tests, first, act as “gateways at important transitional
moments in education, in employment, and in moving from one country to another”.
Secondly, language tests can be worked with in “professional life as a teacher or
administrator, teaching to a test, administering tests, or relying on information from tests
to make decisions on the placement of students on particular courses”. Last but not least,
any researcher who needs to have measures of the language proficiency of the subjects
cannot do it without choosing an existing language test or designing his/her own one.
In conclusion, although there are different concepts of language testing, and many of the terms presenting connotations of language testing are still under discussion, it is widely accepted that a test is a means of measuring students' language abilities and also serves as a device to motivate students in the learning process. Moreover, it can enable testers to check and improve what they or their students lack. The following section will present some background of achievement tests, one of the most common test types used in schools.
I.2. ACHIEVEMENT TESTS
I.2.1. Definition
Achievement tests (also called attainment or summative tests) are defined
differently among different researchers.
According to McNamara (2000:6), “achievement tests accumulate evidence during,
or at the end of, a course of study in order to see whether and where progress has been
made in terms of the goals of learning”. That means an achievement test relates to the past in that it measures what language the students have learned as a result of teaching.
In the same view, Hughes (1989:10) defines achievement tests as the ones that are
“directly related to language courses, their purpose being to establish how successful

individual students, groups of students, or the courses themselves have been in achieving objectives”.
Meanwhile, as for Heaton (1997:14), achievement tests measure students’ mastery
of what should have been taught but not necessarily what has actually been taught. In other
words, these tests are based on what the students are presumed to have learnt. Unlike a progress test, an achievement test should attempt to cover as much of the syllabus as possible. If we
confine our test to only part of the syllabus, the content of the test will not reflect all the
things students have learnt.
Discussing the concept of achievement tests, Harrison (1991:64) makes clear
distinction between an achievement test and a diagnostic test. He views that “designing
and setting an achievement test is a bigger and more formal operation than the equivalent
work for a diagnostic test, because the student’s result is treated as a qualification which
has a particular value in relation to the results of other students. An achievement test
involves more detailed preparation and covers a wider range of material, of which only a
sample can be assessed”.


To summarize, it is obvious that achievement tests play an important role in evaluating students' language proficiency during a course; thus, it is necessary for teachers and test designers to take this kind of test into consideration.
I.2.2. Kinds of achievement tests
Achievement tests can be subdivided into progress achievement tests and final
achievement tests.
I.2.2.1. Progress achievement tests
“Progress achievement tests, as their name suggests, are intended to measure the
progress that students are making”. (Hughes, 1989:12). This type of test, therefore, is
always administered during the course to help teachers identify the weakness of the
learners, and diagnose the areas which are not properly achieved during the teaching-learning process. This enables the teacher to signpost the achievement of the course
objectives. In addition, this kind of test may lead to the teacher’s decision to change

something, not just the test itself, but also their teaching method. Finally, the progress
achievement test is a teaching device and can be considered as a good chance for the
students to prepare for the final achievement test.
I.2.2.2. Final achievement tests
“Final achievement tests are those administered at the end of a course”. (Hughes,
1989:10). They may be written and administered not by the teacher himself but by ministry of education boards of examiners, or by members of teaching institutions. The final achievement test is often based on an adopted syllabus and its approach, either the “syllabus-content approach” or the “course-objective approach”.
In the view of the former approach, the content of a final achievement test should
be based directly on a detailed course syllabus and other materials used. It has an obvious
appeal, since the test only contains what it is thought that the students have actually
encountered, and thus can be considered a fair test. However, it also has bad points when
the syllabus is badly designed, or the materials are badly chosen.


Besides, if the test is based on the latter, its contents are based directly on the
objectives of the course. Hence, the testers must be clear about objectives and make the
test possible to show how far the students have achieved these objectives. To show his
belief that course-objective-based tests are much to be preferred, Hughes asserts that “it
will provide more accurate information about individual and group achievement, and it is
likely to promote a more beneficial backwash effect on teaching.” (1989:11)
In short, achievement tests are obviously crucial in school teaching and learning.
Therefore, whenever one wishes to design a language test, he needs to consider not only
those kinds of achievement tests but also the qualities of a good test.
I.3. QUALITIES OF A GOOD LANGUAGE TEST
In fact, there is no perfect test, because a test that proves ideal for one purpose may be quite useless for another. Apart from that, it cannot be denied that a large number of language tests are of very poor quality. Thus, it is much more fruitful to analyze the characteristics of a good language test and apply the knowledge we have gained from this
analysis to the process of designing a test. Raising the question: “What makes a good
language test?”, different researchers have pointed out different qualities. This part, then,
will mention the four most important ones: reliability, validity, practicality, and
discrimination.
I.3.1 Reliability
“Reliability is a necessary characteristic of any good test” (Heaton, 1998:162).
According to Hughes (1989:36), reliability refers to the consistent results of a test which is
administered to the same testees on different occasions. Bachman & Palmer (1996:19) also state, “Reliability is often defined as consistency of measurement”. Hence, reliability can
be considered to be a function of the consistency of scores from one set of tests and test
tasks to another. That is to say, “tests should not be elastic in their measurements”,
(Harrison, 1991:10). It is important that the test scores should be the same, or as nearly as
possible the same, whether the testee takes one version of a test or another, and whether
one person marks the test or another. Sharing the same opinion, Alderson et al (2001:128-147) argue that the more similar the scores would have been, the more reliable the test is said to be. In other words, the stability of the test results ensures its reliability.
Harrison (1991:11) also identifies in his book three aspects of reliability: the circumstances in which the test is taken, the way in which it is marked, and the uniformity of the assessment it makes.
Reliability is clearly an essential quality of a test, though it is not easy to attain. According to Hughes (1989:36), there are two components of test reliability: the performance of candidates from occasion to occasion, and the reliability of the scoring. Therefore, he suggests several ways of achieving consistent performances from candidates and scorer reliability:
- Take enough samples of behavior.
- Do not allow candidates too much freedom.
- Write unambiguous items.
- Provide clear and explicit instructions.
- Ensure that tests are well laid out and perfectly legible.
- Make sure candidates are familiar with format and testing techniques.
- Provide uniform and non-distracting conditions of administration.
- Use items that permit scoring which is as objective as possible.
- Make comparisons between candidates as direct as possible.
- Provide a detailed scoring key.
- Train scorers.
- Agree on acceptable responses and appropriate scores at the outset of scoring.
- Identify candidates by number, not name.
- Employ multiple, independent scoring.
(Hughes, 1989:36)

To sum up, it should be noted that reliability is clearly inadequate by itself if a test
fails to measure what it is supposed to measure. Furthermore, in order to be reliable, a test
must be consistent in its measurements.



I.3.2. Validity
Validity is the second quality that affects test usefulness. Although the concept of
test validity differs among researchers, most of them agree that a test is said to be valid if it
measures accurately what it is supposed to measure. (Alderson et al, 2001:170; Bachman,
1995:236-238; Harrison, 1991:11 and Heaton, 1998:159).
Henning (1987:89) defines validity as follows:
“Validity in general refers to the appropriateness of a given test or any of its
component parts as a measure of what it is purported to measure. A test is said to be valid
to the extent that it measures what it is supposed to measure. It follows that the term valid
when used to describe a test should usually be accompanied by the preposition for. Any
test then may be valid for some purposes, but not for others. ”
Every test, whether it is a short, informal classroom test or a public examination, should be as valid as the constructor can make it. The test must aim to provide a true measure of the particular skill it is intended to measure: if a test intended to measure the ability to communicate in a foreign language measures knowledge of the grammar of that language at the same time, it will not be considered valid.
It is certain that validity concerns the extent to which meaningful inferences can be
drawn from test scores. Sharing Henning's opinion, Gronlund (1985:57) also mentions,
“validity refers to the appropriateness of the interpretation of the results of a test. ”
Therefore, in order to examine the validity of a test, it requires a validation process by
which a test user presents evidence to support the inferences or decisions made on the basis
of test scores. That is to say, there is a close relationship between validity and reliability. In
order to be valid, it is necessary that the test should be as reliable as possible.
Over recent years, even though different testers have used different names and definitions, it has been traditional to classify validity into different types, such as face, content, criterion-related, and construct validity (Alderson et al, 2001:171-186; Hughes, 1989:22-27; Harrison, 1991:11; Heaton, 1998:159-162).


I.3.2.1. Face validity
“A test is said to have face validity if it looks as if it measures what it is supposed
to measure”, (Hughes, 1989:27). Heaton (1998:159) also views that if a test item looks
right to other testers, teachers, moderators and testees, it can be described as having at least
face validity. Face validity is hardly a scientific concept, yet it is very important. A test
which does not have face validity may not be accepted by candidates, teachers, education
authorities or employers. Hence, the only way to find out about face validity is to ask the
testers and testees concerned for their opinions, either formally by means of a
questionnaire or informally by discussion.
I.3.2.2 Content validity
This kind of validity depends on a careful analysis of the language being tested and
of the particular course objectives. Also, Harrison (1991:11) assumes that “content validity
is concerned with what goes into the test. The content of a test should be decided by
considering the purposes of the assessment and then drawn up as a list known as a content
specification”. More clearly, the test should be constructed in such a way that it can
contain a representative sample of the course, the relationship between the test items and
the course objectives always being apparent. Then, in order to judge whether or not a test
has content validity, we need a specification of the skills or any related aspects that it is
meant to cover. Such a specification should be made at a very early stage in test
construction.
I.3.2.3. Criterion-related validity
The third type of validity is usually referred to as criterion-related validity. It is also
called empirical or statistical validity (Heaton, 1998:161). This validity is obtained as a
result of comparing the results of the test with the results of some criterion measure such
as:
- an existing test, known or believed to be valid and given at the same time; or
- the teacher's rating or any other such form of independent assessment given at the same time; or
- the subsequent performance of the testees on a certain task measured by some valid test; or
- the teacher's ratings or any other such form of independent assessment given later.
(Heaton, 1998:161)

Results obtained by either of the first two methods above are measures of the test’s
concurrent validity in respect of the particular criterion used. The third and fourth methods
estimate the predictive validity of a test which is used to predict future performance.
I.3.2.4. Construct validity
A test is said to have construct validity if it can be demonstrated that it measures
just the ability which it is intended to measure. (Hughes, 1989:26). This type of validity
assumes the existence of certain learning theories or constructs underlying the acquisition
of abilities and skills.
To conclude, validity is, as for many researchers, the most important consideration
in test development. It is, therefore, vital to note that the primary concern in test
development and use is demonstrating not only that test scores are reliable, but that the
interpretations and uses we make of test scores are valid.
I.3.3. Practicality

Another quality of a good test which should not be forgotten is its practicality.
According to Heaton (1998:167), “a test must be practicable: in other words, it must be
fairly straight-forward to administer.”
Theoretically, practicality is defined as the relationship between the resources
(human resources, material resources, and time) that will be required in the design,
development, and use of the test, and the resources that will be available for these
activities (Bachman & Palmer, 1996:36). This relationship can be represented as in the figure below:
Practicality = Available resources / Required resources
When practicality >= 1, the test development and use is practical.
When practicality < 1, the test development and use is not practical.
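As a purely illustrative worked example (the figures below are invented, not taken from the thesis): suppose that designing, administering, and marking a test would require 10 teacher-hours, while the department can make 12 teacher-hours available. Then Practicality = 12 / 10 = 1.2 >= 1, so the test counts as practical; if only 8 teacher-hours were available, Practicality = 8 / 10 = 0.8 < 1, and the test would not be practical.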


Finally, when designing a test, the tester should always bear in mind this quality to ensure that the test is as economical as possible, both in time and in cost.
I.3.4. Discrimination
A discussion of the basic concepts behind testing would be incomplete without the treatment of the closely related idea of discrimination. Heaton (1998:168) also asserts
that an important feature of a test is, sometimes, its capacity to discriminate among the
different candidates and to reflect the differences in the performances of the individuals in
the group.
However, a test that is too easy or too difficult provides little discrimination, because most testees will perform either very well or very badly, and their scores will cluster around a central point. The items in a test, thus, should discriminate over a wide range of ability within a group of testees.
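A common way to quantify how well a single item discriminates, which the thesis does not describe but which is standard in classical item analysis, is the upper-lower discrimination index: rank the testees by total score, take the top and bottom groups, and compare how many testees in each group answered the item correctly. The Python sketch below, with invented function names and made-up data, illustrates the idea.

# Sketch of a classical upper-lower discrimination index (illustrative only).
def discrimination_index(item_correct, total_scores, group_fraction=1/3):
    # item_correct[i] is 1 if testee i answered the item correctly, else 0;
    # total_scores[i] is that testee's total score on the whole test.
    n = len(total_scores)
    k = max(1, int(n * group_fraction))
    # Rank testees from highest to lowest total score.
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    correct_upper = sum(item_correct[i] for i in upper)
    correct_lower = sum(item_correct[i] for i in lower)
    # D ranges from -1 to 1; values near 0 indicate a poorly discriminating item.
    return (correct_upper - correct_lower) / k

# Ten invented testees: total scores out of 50 and their answers to one item.
scores = [45, 42, 40, 38, 35, 30, 28, 25, 20, 15]
item = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(discrimination_index(item, scores))   # 1.0 for this made-up item

An item answered correctly by most strong testees and by few weak ones yields a value near 1, while a value near 0 signals the clustering problem described above.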
In summary, it is essential for testers to remember that the purpose of the test is
crucial in deciding the degree of discrimination; the extent of the need to discriminate will vary in different types of tests.
I.4. OBJECTIVE TESTING
I.4.1. Subjective and objective testing
“Subjective and objective are terms used to refer to the scoring of tests.” (Heaton,
1998:25). An objective test can be marked very quickly and completely reliably. Because
an objective test has only one correct answer (or, at least, a limited number of correct answers), this type of test can be scored by a machine or by an inexperienced person.
Therefore, the fact that objective tests can be marked by computer is one important reason
for their evident popularity among examining bodies responsible for testing large numbers
of candidates. Meanwhile, in a subjective test, candidates must think of what to say and
then express their ideas as well as possible. Since subjective tests allow for much greater
freedom and flexibility in the answers they require, they can only be marked by a
competent marker or teacher.


On the whole, objective tests require more careful preparation than subjective tests.
Examiners tend to spend a relatively short time on setting the questions but considerable
time on marking. In an objective test the tester spends a great deal of time constructing
each test item as carefully as possible, attempting to anticipate the various reactions of the
testees at each stage. The effort is rewarded, however, in the ease of the marking.
I.4.2 Objective tests
According to Ryan (2007), the objective test is only one of many ways in which
students can be evaluated. Tests can be formal or informal, oral or written; and no one
form of testing is necessarily better or worse than another. Objective tests, however, do
offer some advantages over other forms of testing. By definition these testing procedures
are more objective than other procedures. That is, they are less dependent on personal
opinion than some other forms of testing. Objective tests also tend to be more reliable than
other types of testing; and the objective format allows instructors to test a large number of
students on a wide range of topics in a relatively brief period of time. In addition to that, objective tests can be pre-tested before being administered on a wider basis: i.e. they are
given to a small but truly representative sample of the test population and then each item is
evaluated in the light of the testees’ performance.
Obviously, objective tests require a user to choose or provide a response to a
question whose correct answer is predetermined. Such a question might require a student
to:
- select a solution from a set of choices; or
- identify an object or position; or
- supply brief numeric or text responses.
It should be remembered, of course, that an objective test will be a very poor test if:
- the test items are poorly written;
- irrelevant areas and skills are emphasized in the test simply because they are "testable"; and
- it is confined to language-based usage and neglects the communicative skills involved.
(Heaton, 1998:27)


I.4.3. Types of objective tests
Objective tests can contain a number of question types. However, the most
common objective test questions are multiple-choice, true-false, and matching items (McKenna and Bull, 2007).
I.4.3.1. Multiple-choice items
A traditional multiple choice question (or item) is one in which a student chooses
one answer from a number of choices supplied. Probably the most commonly used
objective question, the multiple choice question, consists of:
- A stem – the text of the question
- Options – the choices provided after the stem
- The key – the correct answer in the list of options
- Distracters – the incorrect answers in the list of options


An example of a multiple-choice item:
Put a circle round the letter at the side of the correct option:
He may not come, but we’ll get ready in case he..................
A. will        B. does        C. is        D. may
(Correct answer: B)
(Heaton, 1998:33)
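Because each item of this kind has a single predetermined key, it maps naturally onto a simple data structure and can be marked automatically, as noted in I.4.1 above. The Python sketch below is only an illustration of that idea; the class and field names are invented here and do not come from the thesis or from any testing package.

# Minimal sketch of a machine-scorable multiple-choice item (invented names).
from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str      # the text of the question
    options: dict  # label -> option text (the key plus the distracters)
    key: str       # label of the correct option

    def score(self, answer: str) -> int:
        # Objective scoring: one predetermined correct answer, marked 1 or 0.
        return 1 if answer.strip().upper() == self.key else 0

item = MultipleChoiceItem(
    stem="He may not come, but we'll get ready in case he ...",
    options={"A": "will", "B": "does", "C": "is", "D": "may"},
    key="B",
)
print(item.score("b"))   # 1
print(item.score("D"))   # 0

True-false and matching items can be represented and scored in the same mechanical way, which is part of what makes objective formats attractive when large numbers of candidates must be tested.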
I.4.3.2. True-false questions
A true-false test item is a specialized form of the multiple-choice format in which
there are only two possible alternatives. These questions are written in the form of a
declarative sentence, and can be used when the test-designer wishes to measure a student’s
ability to identify whether statements of fact are accurate or not. Hence, the student must
judge whether the sentence is a true or a false statement.
An example of a true-false question:
T/F  An accountant writes contracts, advises companies on the law.

(Correct answer: F; from Grant & McLarty, 1995, Business Basics, Great Britain: OUP)
I.4.3.3. Matching questions


The matching item is a modification of the multiple-choice question. In a matching
test item, a list of words or phrases is presented in a column, generally on the left side of
the page. These words or phrases are called the premises of the item. A second column,
generally on the right side of the page, contains words or phrases called responses that are
to be matched with the premises.
An example of a matching question:
Look at this list of jobs. Match the jobs (1-5) with the definitions (a-e).
1. pilot                  a. helps people to learn
2. accountant             b. flies planes
3. research scientist     c. assists, word-processes, makes appointments
4. secretary              d. checks financial results
5. teacher                e. usually works in a lab
(Correct answers: 1-b, 2-d, 3-e, 4-c, 5-a; from Grant & McLarty, 1995, Business Basics, Great Britain: OUP)
I.4.3.4. Other question types
- Assertion-Reason questions: combine elements of multiple-choice and true/false question types and allow you to test more complicated issues that require a higher level of learning. The question consists of two statements, an assertion and a reason.
- Multiple response questions: are a variation of multiple choice in which the student is allowed to choose more than one choice from the list.
- Graphical hotspot questions: involve selecting an area of the screen by moving a marker to the required position. Advanced types of hotspot questions include labelling and building questions.
- Text/numerical questions: involve the input of text or numbers at the keyboard.
- Sore finger questions: have been used in language teaching and computer programming, where one word, code or phrase is out of keeping with the rest of a passage. It could be presented as a hotspot or text-input type of question.
- Ranking questions: require the student to relate items in a column to one another and can be used to test knowledge of sequences, order of events, or levels of gradation.
- Sequencing questions: require the student to position text or graphic objects in a given sequence.
- Field simulation questions: offer simulations of real problems or exercises.
(McKenna and Bull, 2007)
Summary

In this chapter, I have briefly presented the theory of testing. Based on this theoretical review, I am going to study the design of a real end-of-year English objective test for 1st year non-major English students in my university.



CHAPTER II: RESEARCH METHODOLOGY

This chapter will deal with the research methodology. This includes a quantitative
study, the participants who took part in doing the test, and the materials from which the test
items were taken. The methods of data collection and data analysis are presented
afterwards. Finally come the limitations of the research.
II.1. QUANTITATIVE STUDY
Like qualitative research, quantitative research comes in many approaches
including descriptive, correlational, exploratory, quasi-experimental, and true-experimental
techniques.
I myself am a teacher of English at the Academy of Finance. I designed this objective test to understand better how things are really operating in my own university, as well as to find out whether objective test items are suitable for testing Business English and can best measure the non-major English students' ability. After the 150-period course, 60 students were chosen randomly from six different classes (44/21.05, 44/21.07, 44/21.12, 44/21.14, 44/21.16, 44/21.18) to do an objective test in the time given (60 minutes), and the results collected from the test papers were then described in different terms with the use of the descriptive statistics technique. The correlational research technique was also used to find out the reliability coefficient later in the study.
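The thesis does not specify which formula the correlational technique feeds into, so the sketch below simply illustrates one common choice, split-half reliability with the Spearman-Brown correction, on invented 0/1 item data; the function and variable names are mine, not the author's.

# Illustrative split-half reliability estimate (invented data, not the thesis results).
from statistics import mean

def pearson(x, y):
    # Pearson correlation between two lists of half-test scores.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(responses):
    # responses: one row per testee, one 0/1 entry per item.
    odd = [sum(row[0::2]) for row in responses]    # score on odd-numbered items
    even = [sum(row[1::2]) for row in responses]   # score on even-numbered items
    r_half = pearson(odd, even)
    # Spearman-Brown correction estimates reliability of the full-length test.
    return 2 * r_half / (1 + r_half)

# Six invented testees answering six items (illustrative only).
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
print(split_half_reliability(responses))   # a coefficient between -1 and 1

A coefficient close to 1 would indicate consistent measurement across the two halves of the test; the same line of reasoning applies whichever reliability estimate is chosen.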
II.2. THE INFORMANTS
The 1st year students at the Academy of Finance mainly come from different towns
and cities in the North of Vietnam. They are generally aged between 18 and 22.

At the university, they study for eight terms in four years. They are all non-majors of
English. They usually have to learn a foreign language, in this case English, in only four
terms of their whole student life. In the first two terms, they study English for Business
Basics (EBB) (Grant and McLarty, 1995), and in the fourth and fifth terms English for Finance and Accounting (EFA) (Thieu, C.X. et al., 1999). After the first two terms of English learning, they are required to be able to read and translate EBB and to be ready to learn EFA in
the next two terms. However, students often have varying English levels prior to the course
due to the fact that at secondary school they learned different languages, including
Russian, French and Chinese. It is therefore important for teachers to apply appropriate
methods in teaching them EBB to help them become more confident before starting to
learn EFA. It is also critical that teachers give them suitable tests which meet their needs and the requirements of society at the same time.
II.3. MATERIALS
During the first two terms the students are required to learn all the 12 units in
Business Basics. These two terms include 150 periods in all, 75 periods for each term.
II.4. METHODS OF DATA COLLECTION AND ANALYSIS
To collect data for the research, a 50-item objective test of EBB was delivered to 60 students of the Accounting Department. These non-majors did the test within the time frame given (60 minutes). The test papers were then collected and marked. After that, the results of the tests were analysed and interpreted to find out the number of students who did the test well and those who performed badly. The results also indicate the most frequent scores the testees got, the way these scores ranged, how far the scores deviated from the mean, etc.
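As a rough illustration of the kind of descriptive statistics referred to here (mean, most frequent score, range, and deviation from the mean), the Python sketch below runs on an invented list of scores; it is not the thesis data, and the variable names are mine.

# Descriptive statistics for a set of test scores (invented data).
from statistics import mean, median, mode, pstdev
from collections import Counter

scores = [38, 42, 27, 35, 44, 31, 38, 29, 40, 36]   # out of 50, illustrative only

print("mean:", mean(scores))
print("median:", median(scores))
print("mode (most frequent score):", mode(scores))
print("range:", max(scores) - min(scores))
print("standard deviation:", round(pstdev(scores), 2))
print("frequency distribution:", Counter(scores))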
II.5. LIMITATIONS OF THE RESEARCH
Like any other study, this one has some shortcomings that cannot be avoided. First, because of limited time and ability, the author could design only one objective test, conducted on 60 students, which might not be a large number. Yet, it is hoped that the results are reliable and valid enough for the researcher to make inferences and come to certain conclusions. Second, instead of designing different types of test, the author was able to make only one type, namely an achievement test, to measure the progress her students had made in EBB after undertaking the EBB course in the school year 2006-2007. From the results, the author could also measure the effectiveness of her teaching.


