Construct Validity and Reliability - PowerPoint PPT Presentation

About This Presentation
Title:

Construct Validity and Reliability

Description:

CSC 426 Values and Computer Technology. Threats to Construct Validity. Theory of Reliability.

Number of Views: 279
Slides: 27
Provided by: khalide9

Transcript and Presenter's Notes

Title: Construct Validity and Reliability


1
Construct Validity and Reliability
  • Presented by
  • Khalid Elmansor

2
Construct Validity
3
What is Construct Validity?
[Diagram: Idea → Program; Construct → Operationalization]
Construct Validity refers to the degree to which
inferences can legitimately be made from the
operationalizations in your study to the
theoretical constructs on which those
operationalizations were based.
4
What is Construct Validity?
5
Construct Validity Types
  • Translation Validity
    • Face Validity
    • Content Validity
  • Criterion-related Validity
    • Predictive Validity
    • Concurrent Validity
    • Convergent Validity
    • Discriminant Validity
6
Construct Validity Types: Face Validity
  • You look at the operationalization and see
    whether, on its face, it seems like a good
    translation of the construct.
7
Construct Validity Types: Content Validity
  • You check the operationalization against the
    relevant content domain for the construct.
8
Construct Validity Types: Predictive Validity
  • You check the operationalization's ability to
    predict something it should theoretically be able
    to predict.
9
Construct Validity Types: Concurrent Validity
  • You check the operationalization's ability to
    distinguish between groups that it should
    theoretically be able to distinguish between.
10
Construct Validity Types: Convergent Validity
  • You examine the degree to which the
    operationalization is similar to other
    operationalizations to which it theoretically
    should be similar.
11
Construct Validity Types: Discriminant Validity
  • You examine the degree to which the
    operationalization is not similar to other
    operationalizations to which it theoretically
    should not be similar.
12
Threats to Construct Validity
  • Inadequate Preoperational Explication of
    Constructs
  • It means you did not do a good enough job of
    defining what you mean by the construct
  • Mono-Operation Bias
  • It means if you only use a single version of a
    program in a single place at a single point in
    time, you may not be capturing the full picture
    of the construct
  • Mono-Method Bias
  • It means if you only use a single measure or a
    single observation, you may only measure part of
    the construct

13
Threats to Construct Validity (cont)
  • Interaction of Different Treatments
  • Is the observed outcome a consequence of your
    treatment or a combination of separate treatments?
  • Interaction of Testing and Treatment
  • Does the testing or measurement itself make the
    participants more sensitive to the treatment?
  • Restricted Generalizability Across Constructs
  • Refers to unintended consequences that threaten
    construct validity
  • Confounding Constructs and Levels of Constructs
  • Occurs when the observed results are only true
    for a certain level of the construct

14
How to assess Construct Validity
  • The Nomological Network
  • The Multitrait-Multimethod Matrix (MTMM)
  • Pattern Matching

15
The Nomological Network
  • It includes a theoretical framework for what you
    are trying to measure, an empirical framework for
    how you are going to measure it, and
    specification of linkages among and between these
    two frameworks
  • It does not provide a practical and usable
    methodology for actually assessing construct
    validity

16
The Multitrait-Multimethod Matrix (MTMM)
  • MTMM is a matrix of correlations arranged to
    facilitate the assessment of construct validity
  • It is based on convergent and discriminant
    validity
  • It assumes that you have several concepts and
    several measurement methods, and you measure each
    concept by each method
  • To determine the strength of construct validity:
  • Reliability coefficients should be the highest
    values in the matrix
  • Coefficients in the validity diagonal should be
    significantly different from zero and high enough
  • The same pattern of trait interrelationships
    should be seen in all triangles.
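As an illustration of these criteria (not part of the original slides), here is a minimal Python sketch that checks the first two criteria against a hypothetical 2-trait × 2-method correlation matrix; all numbers are made up:

```python
# Hypothetical MTMM correlation matrix for 2 traits measured by 2 methods.
# Rows/cols in order: (Trait A, Method 1), (Trait B, Method 1),
#                     (Trait A, Method 2), (Trait B, Method 2)
R = [
    [0.90, 0.45, 0.60, 0.20],
    [0.45, 0.88, 0.22, 0.58],
    [0.60, 0.22, 0.85, 0.40],
    [0.20, 0.58, 0.40, 0.87],
]

reliabilities = [R[i][i] for i in range(4)]          # monotrait-monomethod
validity_diagonal = [R[0][2], R[1][3]]               # monotrait-heteromethod
off_diagonal = [R[0][1], R[0][3], R[1][2], R[2][3]]  # heterotrait entries

# Criterion 1: reliability coefficients are the highest values in the matrix
assert min(reliabilities) > max(validity_diagonal + off_diagonal)
# Criterion 2: validity-diagonal entries are clearly above zero
assert all(v > 0.5 for v in validity_diagonal)
print("MTMM criteria satisfied for this example matrix")
```

The third criterion (the same trait pattern across all triangles) would require comparing the heterotrait triangles block by block and is omitted here for brevity.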

17
Pattern Matching
  • It is an attempt to link two patterns: a
    theoretical pattern and an observed pattern
  • It requires that
  • you specify your theory of the constructs
    precisely!
  • you structure the theoretical and observed
    patterns the same way so that you can directly
    correlate them
  • Common example: an ANOVA table
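A minimal sketch of the idea, with hypothetical numbers: correlate a theoretically predicted ordering of group means with the observed group means.

```python
import statistics

# Pattern-matching sketch: the theoretical pattern is a predicted
# rank-order of four group means; the observed pattern is the measured
# means (all numbers here are made up for illustration).
theoretical = [1, 2, 3, 4]
observed = [10.2, 13.1, 15.8, 19.5]

m_t = statistics.mean(theoretical)
m_o = statistics.mean(observed)
cov = sum((t - m_t) * (o - m_o)
          for t, o in zip(theoretical, observed)) / (len(observed) - 1)
r = cov / (statistics.stdev(theoretical) * statistics.stdev(observed))
print(f"pattern match r: {r:.2f}")  # near 1.0, so the patterns align
```

A high correlation between the two patterns is evidence for construct validity; a low one suggests the operationalizations do not behave as the theory predicts.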

18
Reliability
19
Reliability
  • In research, reliability means repeatability or
    consistency
  • True Score Theory is a simple model for
    measurement. It assumes that any observation is
    composed of a true value plus an error value:
    X = T + e
  • Measurement errors can be random or systematic
  • To reduce measurement error
  • If you collect data from people, make sure to
    train them
  • Double-check the collected data thoroughly
  • Use statistical procedures to adjust measurement
    error
  • Use multiple measures of the same construct
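The last point can be illustrated with a small simulation (a sketch, not from the slides): averaging multiple measures of the form X = T + e shrinks the random-error component.

```python
import random
import statistics

random.seed(0)

# True score theory: each observed score X = T + e, where T is the
# (unobservable) true score and e is random measurement error.
true_scores = [random.gauss(50, 10) for _ in range(1000)]

def observe(t, error_sd=5.0):
    """One noisy observation of true score t."""
    return t + random.gauss(0, error_sd)

single = [observe(t) for t in true_scores]
# Averaging several measures of the same construct reduces random error:
averaged = [statistics.mean(observe(t) for _ in range(4))
            for t in true_scores]

err_single = statistics.stdev(x - t for x, t in zip(single, true_scores))
err_avg = statistics.stdev(x - t for x, t in zip(averaged, true_scores))
print(f"error sd, single measure:   {err_single:.2f}")
print(f"error sd, mean of 4 scores: {err_avg:.2f}")  # roughly halved
```

Averaging n independent measures divides the random-error variance by n, which is why multiple measures of the same construct improve reliability.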

20
Theory of Reliability
  • Reliability can be expressed as the ratio of
    true-score variance to observed-score variance:
    reliability = VAR(T) / VAR(X)
  • You cannot compute reliability directly because
    you cannot calculate the variance of true scores!
  • The range of reliability is between 0 and 1
  • The formula for estimating reliability from two
    measures X1 and X2 of the same construct:
    estimated reliability = COV(X1, X2) / (sd(X1) * sd(X2))
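A quick simulation (hypothetical data, not from the slides) shows the estimator at work: with VAR(T) = VAR(e) = 1, the true reliability is 1/(1+1) = 0.5, and the correlation of two parallel measures recovers it.

```python
import random
import statistics

random.seed(1)
n = 5000
T = [random.gauss(0, 1) for _ in range(n)]    # true scores, VAR(T) ≈ 1
X1 = [t + random.gauss(0, 1) for t in T]      # two parallel measures,
X2 = [t + random.gauss(0, 1) for t in T]      # each with VAR(e) ≈ 1

# Estimated reliability = COV(X1, X2) / (sd(X1) * sd(X2)),
# i.e. the correlation between the two parallel measures.
m1, m2 = statistics.mean(X1), statistics.mean(X2)
cov = sum((a - m1) * (b - m2) for a, b in zip(X1, X2)) / (n - 1)
est = cov / (statistics.stdev(X1) * statistics.stdev(X2))
print(f"estimated reliability: {est:.2f}")  # close to 0.5
```

Because the error terms of the two measures are independent, their covariance isolates VAR(T), so the ratio estimates VAR(T)/VAR(X) without ever observing T.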
21
Types of Reliability Estimation
  • Inter-rater or inter-observer reliability
  • Is used to assess the degree to which different
    raters/observers give consistent estimates of the
    same phenomenon
  • Test-retest reliability
  • Is used to assess the consistency of a measure
    from one time to another
  • Parallel-forms reliability
  • Is used to assess the consistency of the results
    of two tests constructed in the same way from the
    same content domain
  • Internal consistency reliability
  • Is used to assess the consistency of results
    across items within a test
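Internal consistency is commonly estimated with Cronbach's alpha; here is a sketch with made-up scores (the respondents, items, and numbers are all hypothetical):

```python
import statistics

# Hypothetical 4-item test taken by 6 respondents:
# rows = respondents, columns = items.
scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
    [5, 4, 5, 5],
]

k = len(scores[0])  # number of items
item_vars = [statistics.variance([row[j] for row in scores])
             for j in range(k)]
total_var = statistics.variance([sum(row) for row in scores])

# Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # high (about 0.94) here, since
                                         # the items rise and fall together
```

When items measure the same construct, their scores covary, the total-score variance dominates the sum of item variances, and alpha approaches 1.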

22
Reliability and Validity
23
Thank you
  • Any Questions?

24
Extra slides
25
Single correlation matrix
26
Pattern Matching Example