From Abstract to Concrete: Operationalization and Measurement
1
From Abstract to Concrete Operationalization
and Measurement
  • Sharon Paynter
  • Spring 2007
  • Manheim Ch 5

2
Formulating Theory
  • Empirical research is a means of obtaining
    answers to questions about our observations
  • Theory is often stated in abstract terms
  • The answers we want are concrete and specific
  • How can we quantify our concepts in order to make
    precise statements about whether or not our
    theory is supported by our observations?

3
Processes and definitions
  • Operationalization
  • Process of selecting observable properties to
    represent abstract concepts
  • Instrumentation
  • Specific steps to take in making an observation
  • How something will be measured: precise,
    standardized indications of the extent to which a
    characteristic is present
  • Measurement
  • Result of applying an instrument to assign
    numerical values to cases
  • Evidence used in making decisions and answering
    questions.
  • Observation
  • Applying a measuring instrument in order to
    assign values for some characteristic of the
    phenomenon to the cases being studied

4
Observation
  • We can never actually compare concepts; what we
    compare are indicators of concepts
  • Our comparisons can be accurate only to the
    extent that the indicators selected mirror the
    concept we intend them to measure (validity)
  • Improper operationalization → poor reflection of
    concept → faulty conclusion

5
Example Terms
  • Want to test the impact of poverty on educational
    achievement
  • Concept: Educational achievement
  • Variable: Class rank
  • Indicator: Test scores
  • Values: Numerical values from 0 to 100
  • Used to compare groups who are impoverished to
    those who are not

6
Multiple Indicators
  • Most social science concepts are multidimensional
    (more than one aspect or component)
  • Our measures should reflect the diversity of the
    concept if they are to be useful indicators.

7
Multiple Indicators Examples
  • Corn
  • Height
  • May be no difference in height; may also examine:
  • Stalk width
  • Leaf size
  • Corn Yield
  • Weight of corn ears
  • Democracy
  • Holds regular elections
  • But elections alone would include Iraq under
    Saddam Hussein
  • What else should we examine to get at what we
    mean?

8
Operational Definitions
  • Specifying a set of procedures to obtain an
    empirical indicator of a concept in any given
    case
  • Must tell us precisely and explicitly what to do
    in order to determine what quantitative value
    should be associated with a variable in any given
    case.

9
Operational Definition: Importance of Precision
  • Tell others exactly what we have done
  • Evaluate or replicate study
  • We want all measurements collected in exactly the
    same way
  • Research assistants, data collectors
  • If data are collected differently, results will
    not be comparable → invalid conclusions
  • Precise and detailed statements of how to
    operationalize a variable will help us in
    evaluating the results we obtain and in
    eliminating rival explanations

10
Operationalization Example
  • Party unity
  • Voting together on roll call votes
  • Which votes should we use?
  • Procedure for determining how the majority of each
    party voted on each issue
  • How do we treat abstentions?
  • Procedure for computing and then averaging the
    percentage of agreeing votes for each legislator
    (see the sketch below)
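A minimal sketch of what this operational definition might look like in code, assuming Python; the legislators, parties, and votes are hypothetical, and abstentions are handled here by simply excluding them from the tallies (one possible answer to the question above):

    from collections import Counter

    # Hypothetical roll-call data: one dict per vote, abstainers omitted.
    roll_calls = [
        {"A": "yea", "B": "yea", "C": "nay"},
        {"A": "nay", "B": "yea", "C": "nay"},
    ]
    party = {"A": "Dem", "B": "Dem", "C": "Dem"}

    def unity_scores(roll_calls, party):
        agree = Counter()  # votes cast with the party majority
        total = Counter()  # votes cast at all (abstentions never appear)
        for votes in roll_calls:
            # Determine each party's majority position on this roll call.
            tallies = {}
            for member, vote in votes.items():
                tallies.setdefault(party[member], Counter())[vote] += 1
            majority = {p: c.most_common(1)[0][0] for p, c in tallies.items()}
            for member, vote in votes.items():
                total[member] += 1
                if vote == majority[party[member]]:
                    agree[member] += 1
        # Party unity: share of a legislator's votes cast with the majority.
        return {m: agree[m] / total[m] for m in total}

    print(unity_scores(roll_calls, party))  # {'A': 1.0, 'B': 0.5, 'C': 0.5}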

11
Developing an Instrument
  • Operational definition results in an instrument
    for taking measurements
  • Examples include
  • Series of questions on survey
  • Instructions for observing a certain event (e.g.
    UN debate)
  • Sets of numbers to be taken from a sourcebook,
    and rules for combining them into a measure

12
Measurement
  • Result of applying an instrument to assign
    numerical values to cases
  • Evidence used in making decisions and answering
    questions.

13
Level of Measurement
  • Classification according to how much information
    it gives us about the phenomena being measured
    and their relationship to one another
  • Nominal
  • Ordinal
  • Interval/Ratio

14
Levels of Measurement
  • Nominal
  • Set of discrete categories to distinguish between
    cases
  • Naming or classifying cases into groups
  • Only allows sorting cases into groups
  • Mutually exclusive: each case can be assigned to
    only a single category
  • Collectively exhaustive: all cases can be
    assigned to some category
  • Party affiliation: Democrat, Republican,
    Independent/Other (a coding sketch follows)
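As a small illustration (a hypothetical coding rule, not from the slides), a nominal coding function can enforce both properties: each raw response maps to exactly one category, and a catch-all keeps the scheme exhaustive:

    def code_party(raw: str) -> str:
        # Each response gets exactly one code (mutually exclusive) ...
        if raw == "Democrat":
            return "Democrat"
        if raw == "Republican":
            return "Republican"
        # ... and the catch-all guarantees every case gets some code
        # (collectively exhaustive).
        return "Independent/Other"

    print([code_party(r) for r in ["Democrat", "Green", "Republican"]])
    # ['Democrat', 'Independent/Other', 'Republican']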

15
Level of Measurement
  • Ordinal
  • Allows us to associate a number with each case
  • Categorize, order, and rank cases
  • Relative differences
  • Social class: Lower, Middle, Upper
  • Attitude: Strongly Disagree, Disagree, Neutral,
    Agree, Strongly Agree

16
Level of Measurement
  • Interval/Ratio
  • Exact differences between cases
  • Associate a number with each case
  • Standard unit of the property being measured
  • Zero point
  • Interval: zero point is arbitrary
  • Ratio: absolute 0 (nothing below it)
  • Age: 0 to ???

17
Ordinal vs Interval/Ratio
  • Under 10,000
  • 10,000-19,999
  • 20,000-29,999
  • 30,000-39,999
  • 40,000-49,999
  • Differences within a category can span up to
    10,000
  • Exact salary
  • Open-ended question
  • Provides exact comparisons between salaries
  • 10,000 is ½ of 20,000 (see the sketch below)
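A brief sketch of the contrast, using hypothetical salaries: bracket codes preserve only order, while exact figures support ratio statements of the kind just mentioned:

    # The same salaries measured two ways (hypothetical data).
    salaries = [15_000, 20_000, 35_000, 48_000]

    def bracket(s):
        # Ordinal coding: 0 = under 10,000; 1 = 10,000-19,999; and so on.
        return min(s // 10_000, 4)

    codes = [bracket(s) for s in salaries]
    print(codes)                      # [1, 2, 3, 4] -- order only
    print(salaries[0] / salaries[1])  # 0.75 -- a real ratio comparison
    print(codes[0] / codes[1])        # 0.5  -- meaningless for bracket codes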

18
Stronger Measurement Better
  • You want operationalizations that allow interval
    level measurement whenever possible and
    appropriate
  • Interval provides most information and allows
    mathematical calculations
  • If you use a lower level of measurement you may
    be wasting potentially valuable information (you
    can always collapse a stronger variable into
    weaker categories, but not the reverse)
  • Republican, Democrat, Independent
  • Strong to weak affiliation
  • Level of measurement should be
  • Theoretically defensible
  • Technologically possible (measurement technology)

19
Caveat: Measurement Precision
  • There are cases where too much precision in
    measurement is undesirable
  • Age and participation in the 2002 election
  • Giving up some precision may provide clearer
    results (e.g. 5-year groupings)
  • Operationalize concepts as precisely as possible
  • Can collapse categories later if necessary (see
    the sketch below)
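A minimal sketch of that advice, with hypothetical ages: collect exact values first, since you can always collapse them into 5-year groupings afterward, but never recover exact ages from the groups:

    from collections import Counter

    # Exact ages collected at full precision (hypothetical cases).
    ages = [18, 19, 23, 24, 31, 32, 38, 67]

    def five_year_group(age):
        lo = (age // 5) * 5
        return f"{lo}-{lo + 4}"  # e.g. 23 -> "20-24"

    print(Counter(five_year_group(a) for a in ages))
    # Counter({'15-19': 2, '20-24': 2, '30-34': 2, '35-39': 1, '65-69': 1})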

20
Outcome of Measurement
  • Two sources of differences in scores from a
    measurement:
  • Real differences: actual variation in the
    property we are examining
  • Measurement error: differences in the values
    assigned to cases that can be attributed to
    anything other than real differences
  • Something about the measure or setting causes the
    differences
  • They do NOT reflect authentic differences in the
    properties we are examining

21
Systematic Measurement Error
  • Constant across cases and across studies in which
    the same measure is used
  • Confusion of variables
  • Nature of the instrument
  • Results are invalid
  • The differences (or similarities) our measure
    reveals are not accurate reflections of the
    differences we think we are measuring

22
Random Measurement Error
  • Affects each application of instrument
    differently
  • A matter of chance, due to:
  • Transient characteristics in our cases
  • Situational variations in application of
    instrument
  • Mistakes in administration and processing
  • Other factors that vary from one use of
    instrument to the next
  • Makes measures unreliable
  • We cannot consistently get the same results
    (illustrated in the sketch below)
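A small simulation, with hypothetical true scores and assuming Python's random module, makes the point concrete: adding chance error to the same true scores twice produces two applications that disagree, which is exactly what unreliability looks like:

    import random
    random.seed(1)  # fixed seed so the sketch is repeatable

    true_scores = [50, 55, 60, 65, 70]

    def measure(t):
        # Observed score = true score + random error (mean 0, sd 5).
        return t + random.gauss(0, 5)

    first = [measure(t) for t in true_scores]
    second = [measure(t) for t in true_scores]
    # Nonzero gaps between applications reflect random error alone.
    print([round(a - b, 1) for a, b in zip(first, second)])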

23
Measurement Error: Distorting Influences
  • Differences in distribution of other, relatively
    stable characteristics among the cases that are
    unintentionally revealed by our measures
  • Political ideology: intelligence/region/culture
  • Differences in the distribution of temporary
    characteristics among the cases that are
    reflected in our measures
  • Mood, health, events (corruption, disaster)
  • Differences in subjects' interpretation of the
    measuring instrument
  • Ambiguously worded questions ("vote in last
    election": which one?)
  • Differences in the setting in which the measure
    is applied
  • Race, sex, age of interviewer

24
Measurement Error: Distorting Influences
  • Differences in the administration of the measuring
    instrument
  • Differences in scores as a result of errors that
    occur during data collection/recording
  • Interviewer misinterprets instructions, poor
    lighting, broken pencils etc.
  • Differences in the processing and analysis of data
  • Differences in the way individuals respond to the
    form of the measuring instrument

25
Validity: Are we measuring what we think we are
measuring?
  • Extent to which our measures correspond to
    concepts they are intended to reflect
  • Properties of a valid measurement
  • Appropriate: describes the concept suitably
  • Complete: includes the correct properties
  • Extent to which differences in scores on a
    measure reflect only differences in the
    distribution of values on the variable we intend
    to measure
  • Main concern is systematic error
  • Depends on knowledge of subject and careful
    analysis of alternative operationalizations
  • Can only be tested after we have collected data

26
Testing Validity
  • Pragmatic or Predictive Validity
  • Assessing validity of a measure from evidence of
    how well it works in allowing us to predict
    behaviors
  • Requires an alternative indicator of the variable
    against which to check the measure
  • Face Validity
  • On the face of it, are there good reasons to
    think that this measure is an accurate gauge of
    the intended characteristic?

27
Testing Validity
  • Construct Validity
  • Extent to which actual relationships between
    scores of various measures are consistent with
    what we expect from our theory
  • External Validity
  • Comparing scores on the measure being validated
    with scores on measures of other variables
  • Strength of alliance: voting together in the UN,
    number of trade barriers

28
Testing Validity
  • Internal/Convergent Validity
  • Comparing scores on various measures of SAME
    variable
  • Quality of street lighting: survey of residents,
    light meter, independent rating, rating pictures
    of other streets (see the sketch below)
  • Multiple indicators allow us to test the validity
    of measures and increase the chance of obtaining a
    valid measure in the first place
  • Discriminant Validity
  • Comparing scores on measures that represent
    different/opposite concepts
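A sketch of how these checks might be run on the street-lighting example, with hypothetical scores (requires Python 3.10+ for statistics.correlation): measures of the same variable should correlate highly, while a measure of an opposite concept should not:

    from statistics import correlation  # Pearson's r, Python 3.10+

    # Hypothetical scores for six streets, scaled 1-10.
    resident_survey   = [2, 4, 5, 7, 8, 9]   # residents' ratings
    light_meter       = [1, 3, 6, 6, 9, 10]  # meter readings, rescaled
    independent_rater = [2, 5, 5, 8, 7, 9]
    fear_of_crime     = [9, 8, 6, 5, 3, 2]   # a different, opposite concept

    print(correlation(resident_survey, light_meter))        # high -> convergent
    print(correlation(resident_survey, independent_rater))  # high -> convergent
    print(correlation(resident_survey, fear_of_crime))      # negative -> discriminant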

29
Reliability
  • How stable are the values from a measure?
  • If a measure is not reliable, it is not valid
  • If it is reliable, it can still be invalid
  • A measure may be reliable without being valid, but
    cannot be valid without being reliable
  • Reliability is threatened only by random error

30
Testing Reliability
  • Test-Retest Method
  • The same measure is applied to the same set of
    cases multiple times over time (see the sketch
    below)
  • Alternative Form Method
  • Different forms of the measure applied to the same
    group at the same time
  • Subsample Method (split-half)
  • Draw one sample, divide it into several subsamples
  • Give the same measure to all subsamples
  • Compare answers from the different subsamples
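A minimal test-retest sketch, with hypothetical scores for six cases measured at two times (again using statistics.correlation from Python 3.10+): a reliable measure yields nearly the same values on both administrations:

    from statistics import correlation  # Python 3.10+

    time1 = [12, 15, 9, 20, 14, 17]   # first administration
    time2 = [13, 14, 10, 19, 15, 18]  # same cases, second administration

    r = correlation(time1, time2)
    print(f"test-retest reliability: r = {r:.2f}")  # near 1.0 -> reliable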

31
Pretesting
  • Used to test data collection procedures
  • Reliability and validity must be established
    before beginning a study