
Design of the evaluation


George C. Homans (1949): "People who write about methodology often forget that it is a matter of strategy, not of morals. There are neither good nor bad methods but only methods that are more or less effective under particular circumstances in reaching objectives on the way to a distant goal."

Design of the evaluation includes:
• Designing the process and outcome evaluation
• Important concepts: validity and unit of analysis
• Informal designs: self-evaluation, expert judgment design
• Formal designs: one group and extended one group design

Presenters:
R.M. Kaphle
Modestus Uzokwe


The design indicates:
• Which people or units will be studied?
• How will they be selected?
• What kind of comparison will be drawn?
• What is the timing of the investigation?
So the keys in design are:
Status
Method
Comparison
Change/Longitudinal


Process Evaluation:
Is the program being conducted and producing output as planned?
How can the process be improved?

• In fact not very different from monitoring: a similar kind of inquiry in both.
But some differences:
• Often conducted for the benefit of the program
• Helps to understand the program: what it has been doing and how
• Leads staff to reflect on how the program might improve its operations
• More systematic; relies more on data and less on intuitive judgments


A program evaluation has to be designed, whether the method of investigation is qualitative or quantitative.
First, the evaluator has to decide:
- which sites to study (large or small, few or several, e.g. classrooms within the school) and which people to query
- which sampling strategy to use
- the time period: which dates, at what intervals, how intensively
In qualitative work (Yin): make design an integral feature.
A major advantage of process evaluation is the "opportunity to find the unexpected":
- The evaluator can follow the trail wherever it leads
- She also learns which aspects of the program matter to staff and to participants
- What elements of the program are relevant to whom at which time?


When should an evaluator conduct a process evaluation?
• When she knows little about the nature of the program and its activities
• When the program represents a marked departure from ordinary methods of service
• When the theories underlying the program, implicit or explicit, are doubtful, disputable, or problematic
• …in such cases a wide-gauge inquiry is worthwhile.
However, there is again the option of quantitative measures of program process: design forms, interviews, surveys.
So, whichever method or combination of methods is used for process evaluation, the evaluator has to decide the frequency with which information will be collected.


Key methodological components to consider in process evaluation

Design: Timing of data collection: when and how often data will be collected.
Example: observe classroom activities at least twice per semester, with at least 2 weeks of observation; conduct focus groups with participants in the last month of the intervention.

Data sources: Source of information (for example, who will be surveyed, observed, interviewed).
Example: for both qualitative and quantitative methods, data sources include participants, teachers/staff delivering sessions, records, the environment, etc.

Data collection tools/measures: Instruments, tools, and guides used for gathering process-evaluation data.
Example: for both qualitative and quantitative methods, tools include surveys, checklists, observation forms, interview guides, etc.

Data collection procedures: Protocols for how the data collection tool will be administered.
Example: a detailed description of how to do quantitative/qualitative classroom observation, face-to-face or phone interviews, mailed surveys, focus groups, etc.

Data management: Procedures for getting data from the field and entered, plus quality checks.
Example: staff turn in participant sheets weekly; the evaluation coordinator collects and checks surveys and gives them to data entry staff; interviews are transcribed and tapes submitted at the end of the month.

Data analysis: Statistical and/or qualitative methods used to analyse or summarise data.
Example: the statistical analysis and software that will be used to analyse the quantitative data; the types of qualitative analysis used.


Outcome evaluation: to what extent have a program's goals been met?

Two-fold: before - program - after.
Diagram of an experiment: participants are randomly assigned either to the program group or to the control group, and each group is measured before and after.

                 Before   After
Program group:     a        b
Control group:     c        d

b - a = outcomes for participants = y
d - c = outcomes for the control group = z
y - z = the net outcome of the program. If positive, the program has a positive net outcome. More in chapter 9.
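The arithmetic of the experimental comparison above can be sketched as follows; the before/after scores are purely hypothetical numbers, not data from any real program.

```python
# Net-outcome arithmetic for the randomized experiment sketched above.
# a, b: before/after scores for program participants (hypothetical).
# c, d: before/after scores for the control group (hypothetical).
a, b = 52.0, 68.0
c, d = 51.0, 59.0

y = b - a            # gross change for participants
z = d - c            # change for the control group
net_outcome = y - z  # change attributable to the program

print(y, z, net_outcome)  # 16.0 8.0 8.0 -> positive net outcome
```

Subtracting the control group's change removes maturation, outside events, and other influences that affected both groups alike, which is exactly why the randomized design is the benchmark.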


In the absence of an experimental design, conditions other than the program can be responsible for observed outcomes. Chief among them are:
Selection
Attrition
Outside effects
Maturation
Testing
Instrumentation

If there is no reason to suspect that selection or attrition or outside events or any other threats are getting in the way, then an experimental design is a luxury. The evaluator is doing an evaluation to inform the policy community about the program and to help decision makers make wise decisions. Her task is to provide the best information possible. What the evaluator needs to be concerned about is countering credible threats to valid conclusions.









Important concepts
• Validity: the extent to which the indicator actually captures the concept that it aimed to measure.
• Valid findings describe the way things actually are.
• Internal: the causal link between the independent (program inputs: describing the participants or features of the service they receive) and dependent (observed outcomes of the program) variables.
• External/generalizability: whether the findings of one evaluation can be generalized to apply to other programs of a similar type.


Unit of analysis and unit of sampling
UOA: the unit that the evaluation measures and enters into statistical analysis; it is the unit that figures in data analysis. The unit that receives a program is usually a person, but it can be a group, an organization, a community, or any other definable and observable unit.
Researchers used to worry about the appropriate UOA: e.g., from an analysis at the higher level of departments of health it is not possible to reach sound conclusions about individuals within those departments. This is the ecological fallacy.

UOS: the entity that is selected into the program, e.g. a "study of a nutrition education program carried out in supermarkets". In programs conducted in schools, classrooms are usually the unit sampled. There may be 2 stages of sampling: first, schools can be sampled, followed by a choice of classrooms within the selected schools.


However, techniques of multi-level analysis make it possible to analyze data at several levels
simultaneously. A matter that requires continuing care is the unit of measurement.
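The two-stage sampling just described, schools first and then classrooms within the selected schools, can be sketched as follows; the school and classroom names are invented for illustration.

```python
import random

# Hypothetical roster: each school (stage-1 unit) lists its classrooms
# (stage-2 units, the unit of sampling in this example).
schools = {
    "School A": ["A-101", "A-102", "A-103"],
    "School B": ["B-201", "B-202"],
    "School C": ["C-301", "C-302", "C-303", "C-304"],
}

rng = random.Random(42)  # fixed seed so the draw is reproducible

# Stage 1: sample schools without replacement.
sampled_schools = rng.sample(sorted(schools), k=2)

# Stage 2: sample one classroom within each selected school.
sampled_classrooms = [rng.choice(schools[s]) for s in sampled_schools]

print(sampled_schools, sampled_classrooms)
```

Note that even when classrooms are the unit of sampling, the unit of analysis may still be the individual pupil; keeping the two distinct is the point of the passage above.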


Designs: 2 key questions to keep in mind
a) What comparisons are being made, and will these comparisons provide sound conclusions?
b) Will the findings from a study like this be persuasive to potential audiences?

Informal
Self-evaluation
Expert judgment

Formal
1. One group design
a) after only
b) before and after
2. Extending one group design
a) more data during the program
b) DRD: Dose-Response Design
c) Time Series Design
3. Comparison groups…


1. Self-evaluation
• Ask the people who are involved with the program what they think. For instance: staff about day-to-day operations, administrators about pluses and minuses, clients about services, outsiders (press, community, other organizations) about what they like or dislike about the program.
• Shortcoming: if people know it is for an evaluation, they tend to give the most favorable reporting.
• Despite the weaknesses, these kinds of judgments have value.


2. Expert Judgment
Who are they? Knowledgeable outsiders, experts in the program area, teams of individuals, panels.
What do they do? Inspect, review, and evaluate the program; render arbitrator judgments.
Weaknesses: weak data; chances to reflect ideological, historical, and cultural preferences and conflicting schools of thought.
Which phase, and where are the experts better at judging? Any phase of the program, from recruitment practices or financial accounting to the outcomes for beneficiaries.
Strengths: in the procedures and practices of the program.
• Teams and panels: no single person can exercise her unique standards; decisions have to survive…; a wider range of experience and skill.



Formal: i) One group design
• Does not include any comparison units.
• 2 categories of evaluation: after only (AO) and before and after (BA).
A) After only: the evaluator finds out what the situation is after the program by examining records; if records are unavailable, the evaluator can use historical comparisons.
E.g. compare test results at the end of the semester (3rd) with the history of test results (1st and 2nd).
This adds useful information, but there are drawbacks: the changing situation of the school, different populations, test systems, teachers…
Questions about the validity of the data and the reliability of people's memories mean there is real difficulty in producing a reliable baseline without records.
BUT certain matters of fact, such as age, when they completed school, whether they were employed or unemployed, and what kind of work they were doing, draw fairly trustworthy responses.
Another way is through the "use of experienced judgment".


B) Before and after: Look at program recipients before they enter the program and then again at the end.

Cohort 1: before, after
Cohort 2: before, after
Cohort 3: before, after
Cohort 4: before, after
(each cohort is one group)

Result: was the change in their skills or health or income due to the program? Maybe, or maybe not. But if the data are collected carefully and systematically, they offer considerable information about the program and the people who participate in it. Then the evaluation will be able to say much about how the program is working.


Advantages and disadvantages of one group design

Advantages:
• They provide a preliminary look at the effectiveness of a program.

Disadvantages:
• Program effects may be underestimated if outside events operate to counteract program efforts.
• The evaluator's mind's eye; higher expectations.
• Cannot provide a quick result.
• Some agencies demand a one-time ex-post facto investigation: short-term needs, quick results. Evaluators will have to exploit every opportunity…


ii) Extending one group design: the one group design can be elaborated in 2 basic directions:
• collecting more data on what happens during the program
• collecting data much before and after the program
That is:
MDDP: More Data During the Program
DRD: Dose-Response Design


MDDP: More Data During the Program
• One way: a "during" measure, or a series of 3 "during-during-during" measures.
• Qualitative and quantitative data can be collected on program services and on participants' progress as they move through the program.
• The data can be analyzed to elaborate the picture of what happens during the program.
• It can identify associations between program events and participant outcomes.
• It can probe into the relationships among rules, recruitment, strategies, mode of organization, auspices, services, leadership, communication, feedback loops… and whatever else people are concerned about.

DRD: Dose-Response Design
• The evaluation can compare participants who received a large quantity of service (a high dose) with those who received a lower dose.
• The notion that more is better may not always be right.
• In social programs, the evaluator can make many internal comparisons.
• It is a highly versatile design.
• It can be used not only in one group designs but also when the evaluation includes a comparison or control group… (more in ch. 9)
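A dose-response comparison of the kind just described can be sketched with invented data: participants are split by the amount of service received (the 20-hour cutoff is an assumption for illustration) and their mean outcomes compared.

```python
from statistics import mean

# Hypothetical records: (hours of service received, outcome score).
participants = [
    (40, 75), (35, 70), (38, 78),
    (12, 62), (10, 60), (15, 66),
]

threshold = 20  # assumed cutoff separating high-dose from low-dose
high = [score for hours, score in participants if hours >= threshold]
low = [score for hours, score in participants if hours < threshold]

# Compare mean outcomes; "more is better" is a hypothesis to test,
# not something the design lets us assume.
print(mean(high), mean(low), mean(high) - mean(low))
```

A positive difference is only suggestive: participants who sought out more service may differ from the start, which is the selection threat noted earlier.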


Program Theory Model: MDDP
The assumed link between tenant participation in the management of a public housing project and resident security. The program theory hypothesizes two chains (the link is not obvious):

Chain 1: tenant/resident representatives create greater demand for security staff; more security staff are hired; more patrol, surveillance, and arrests; inhibition of violence; improved security.

Chain 2: tenant/resident representatives foster a greater sense of individual responsibility for the project; more neighborliness and social networks; a culture of mutual concern; more people's eyes on the street; tenant watch, fear among troublemakers; inhibition of violence; improved security.

The evaluator then designs methods for collecting data on whether the assumptions actually come to pass; she locates sources and time intervals.



We are discussing a variety of research designs for evaluating the process and outcomes of social programs. The evaluator will want to use designs that are less vulnerable to bias and incompleteness…


TIME-SERIES DESIGNS

Presented by: MODESTUS


TIME-SERIES DESIGNS
Time-series data enable the evaluator to interpret the pre-to-post changes in the light of additional evidence. They show whether the measures immediately before and after the program fit the existing trend or whether they mark a decisive change.


This type of evaluation has the ability to follow participants for more than a year after their departure from the program; it can be applied to crime, welfare caseloads, hospital stays, medical costs, births, refugee resettlement, etc.
The main criticism is that some outside event may have coincided with the program and been responsible for whatever change was observed between before and after. For example, it may not have been the program that accounted for a decline in smoking; it could have been a television series on the risks of smoking that came along at the same time.
Another way to cope with the possibility of contamination by outside events is to study what happened over the same time period to people who did not receive the program but were exposed to the same external influences.
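The logic of a time-series check, comparing the before-and-after shift against the series' ordinary fluctuation, can be sketched with invented monthly fatality counts (not data from the speeding study discussed below):

```python
from statistics import mean

# Hypothetical monthly traffic fatalities; the program starts at index 6.
series = [30, 31, 29, 32, 30, 31, 24, 23, 25, 22, 24, 23]
start = 6

before, after = series[:start], series[start:]
drop = mean(before) - mean(after)

# If the drop exceeds the ordinary month-to-month fluctuation, the
# change looks like a decisive shift rather than noise.
fluctuation = max(before) - min(before)

print(mean(before), mean(after), drop, fluctuation)  # 30.5 23.5 7.0 3
```

Here the 7-point drop is well outside the 3-point pre-program fluctuation, which is the kind of additional evidence a single before/after pair cannot supply.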


Comparison Group
When these groups are not selected randomly from the same population as program recipients, we call them comparison groups. The main purpose of the comparison is to see whether receiving the program adds something extra that comparison units do not get: the comparison shows what program recipients would have been like if they had not been in the program.
Before-and-After with Comparison Group
The comparison group is selected to be as much like the client group as possible, through any of a variety of procedures. The evaluator can locate a similar site where the program does not take place, compare the status of both groups before the program began, and test how similar they were. However, each comparison will compensate for differences that the other leaves uncontrolled.


Multiple Time Series
These are especially useful for evaluating policy changes, when a jurisdiction passes a law. Example: the evaluation of a crackdown on highway speeding. Evaluators collected reports of traffic fatalities for several periods before and after the new program went into effect. They found that fatalities went down after police began strict enforcement of penalties for speeding.
Constructing a Comparison Group
Effect can be assumed (program - effect).

