Title: Construct Validity and Reliability
1. Construct Validity and Reliability
- Presented by Khalid Elmansor
2. Construct Validity
3. What is Construct Validity?
[Diagram: an idea or program (the construct) is translated into an operationalization]
Construct validity refers to the degree to which inferences can legitimately be made from the operationalizations in your study to the theoretical constructs on which those operationalizations were based.
4. What is Construct Validity? (cont.)
5. Construct Validity Types
Construct validity breaks down into two broad families:
- Translation Validity
  - Face Validity
  - Content Validity
- Criterion-related Validity
  - Predictive Validity
  - Concurrent Validity
  - Convergent Validity
  - Discriminant Validity
6. Construct Validity Types: Face Validity
You look at the operationalization and see whether, on its face, it seems like a good translation of the construct.
7. Construct Validity Types: Content Validity
You check the operationalization against the relevant content domain for the construct.
8. Construct Validity Types: Predictive Validity
You check the operationalization's ability to predict something it should theoretically be able to predict.
9. Construct Validity Types: Concurrent Validity
You check the operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between.
10. Construct Validity Types: Convergent Validity
You examine the degree to which the operationalization is similar to other operationalizations to which it theoretically should be similar.
11. Construct Validity Types: Discriminant Validity
You examine the degree to which the operationalization is not similar to other operationalizations to which it theoretically should not be similar.
12. Threats to Construct Validity
- Inadequate Preoperational Explication of Constructs
  - You did not do a good enough job of defining what you mean by the construct.
- Mono-Operation Bias
  - If you only use a single version of a program in a single place at a single point in time, you may not be capturing the full picture of the construct.
- Mono-Method Bias
  - If you only use a single measure or a single observation, you may only measure part of the construct.
13. Threats to Construct Validity (cont.)
- Interaction of Different Treatments
  - Is the observed outcome a consequence of your treatment alone, or of a combination of separate treatments?
- Interaction of Testing and Treatment
  - Does the testing or measurement itself make participants more sensitive to the treatment?
- Restricted Generalizability Across Constructs
  - Refers to unintended consequences that threaten construct validity.
- Confounding Constructs and Levels of Constructs
  - Occurs when the observed results hold only for certain levels of the construct.
14. How to Assess Construct Validity
- The Nomological Network
- The Multitrait-Multimethod Matrix (MTMM)
- Pattern Matching
15. The Nomological Network
- It includes a theoretical framework for what you are trying to measure, an empirical framework for how you are going to measure it, and a specification of the linkages among and between these two frameworks.
- It does not, however, provide a practical and usable methodology for actually assessing construct validity.
16. The Multitrait-Multimethod Matrix (MTMM)
- MTMM is a matrix of correlations arranged to facilitate the assessment of construct validity.
- It is based on convergent and discriminant validity.
- It assumes that you have several concepts (traits) and several measurement methods, and that you measure each concept by each method.
- To judge the strength of construct validity (see the sketch after this list):
  - Reliability coefficients should be the highest values in the matrix.
  - Coefficients in the validity diagonal should be significantly different from zero and high enough.
  - The same pattern of trait interrelationships should be seen in all triangles.
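A minimal sketch of how an MTMM-style correlation matrix can be built, assuming simulated data; the trait and method names are invented for illustration and are not from the slides.

```python
import numpy as np

# Hypothetical example: 3 traits, each measured by 2 methods.
# Observed score = trait + shared method bias + random error.
rng = np.random.default_rng(0)
n = 200
traits = rng.normal(size=(n, 3))          # latent trait scores
method_bias = rng.normal(size=(n, 2))     # variance shared within a method

measures = {}
for t, trait_name in enumerate(["trait_A", "trait_B", "trait_C"]):
    for m, method_name in enumerate(["method_1", "method_2"]):
        measures[f"{trait_name}_{method_name}"] = (
            traits[:, t] + 0.5 * method_bias[:, m] + rng.normal(scale=0.7, size=n)
        )

labels = list(measures)
data = np.column_stack([measures[k] for k in labels])
mtmm = np.corrcoef(data, rowvar=False)    # the multitrait-multimethod matrix

# Validity diagonal: same trait measured by different methods.
# For construct validity these should stand out above the
# different-trait, different-method correlations elsewhere in the matrix.
for t in ["trait_A", "trait_B", "trait_C"]:
    i = labels.index(f"{t}_method_1")
    j = labels.index(f"{t}_method_2")
    print(t, "convergent (validity diagonal) r =", round(mtmm[i, j], 2))
```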
17. Pattern Matching
- It is an attempt to link two patterns: a theoretical pattern and an observed pattern.
- It requires that:
  - you specify your theory of the constructs precisely;
  - you structure the theoretical and observed patterns the same way, so that you can directly correlate them (a minimal sketch follows).
- Common example: an ANOVA table.
18. Reliability
19. Reliability
- In research, reliability means repeatability or consistency.
- True Score Theory is a simple model of measurement. It assumes that any observation is composed of a true value plus an error value: X = T + e.
- Measurement errors can be random or systematic.
- To reduce measurement error (a small simulation follows):
  - If you collect data from people, make sure to train them.
  - Double-check the collected data thoroughly.
  - Use statistical procedures to adjust for measurement error.
  - Use multiple measures of the same construct.
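A minimal simulation of the true score model X = T + e, showing why multiple measures of the same construct help; the sample size, score scale, and error spread are assumptions chosen for illustration.

```python
import numpy as np

# X = T + e: each observed score is a true score plus random error.
rng = np.random.default_rng(1)
n = 5_000
T = rng.normal(loc=50, scale=10, size=n)      # true scores
X1 = T + rng.normal(scale=6, size=n)          # one noisy measure
X2 = T + rng.normal(scale=6, size=n)          # an independent second measure

# Averaging multiple measures of the same construct cancels random error,
# so the averaged score tracks T more closely than either measure alone.
print("error spread of one measure:   ", round(np.std(X1 - T), 2))
print("error spread of averaged score:", round(np.std((X1 + X2) / 2 - T), 2))
```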
20. Theory of Reliability
- Reliability can be expressed as the ratio of true-score variance to observed-score variance:
  reliability = VAR(T) / VAR(X)
- You cannot compute this directly, because you cannot calculate the variance of the true scores!
- The range of reliability is between 0 and 1.
- Reliability is therefore estimated as the correlation between two observations of the same measure (a sketch follows):
  estimated reliability = COV(X1, X2) / (sd(X1) · sd(X2))
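A sketch of the estimate COV(X1, X2) / (sd(X1) · sd(X2)) on simulated data, assuming the same invented true-score setup as above; with a true-score spread of 10 and an error spread of 6, the true reliability is about 10² / (10² + 6²) ≈ 0.74.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
T = rng.normal(loc=50, scale=10, size=n)      # true scores (unknown in practice)
X1 = T + rng.normal(scale=6, size=n)          # first observation
X2 = T + rng.normal(scale=6, size=n)          # second observation of the same measure

true_reliability = T.var() / X1.var()                      # needs the unknown T
estimated = np.cov(X1, X2)[0, 1] / (X1.std() * X2.std())   # needs only observations
print("true reliability (unobservable in practice):  ", round(true_reliability, 2))
print("estimated reliability (correlation of X1, X2):", round(estimated, 2))
```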
21. Types of Reliability Estimation
- Inter-rater or inter-observer reliability
  - Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.
- Test-retest reliability
  - Used to assess the consistency of a measure from one time to another.
- Parallel-forms reliability
  - Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
- Internal consistency reliability
  - Used to assess the consistency of results across items within a test (a Cronbach's alpha sketch follows).
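A minimal sketch of one common internal consistency estimate, Cronbach's alpha; the item data below are simulated and the number of items and respondents are assumptions for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 4 items that all tap the same underlying construct.
rng = np.random.default_rng(3)
trait = rng.normal(size=300)
items = np.column_stack([trait + rng.normal(scale=0.8, size=300) for _ in range(4)])
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```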
22. Reliability and Validity
23. Thank You
24. Extra Slides
25. Single Correlation Matrix
26. Pattern Matching Example