Research Methods - PowerPoint PPT Presentation

1
Research Methods
  • Collecting, Processing and Analyzing Data

2
Aims of the Session
  • The purpose of this session is to
  • Alert you to the different types of methodology
    available to you in your research
  • Make you aware of the different techniques that
    you might use in collecting, presenting and
    analysing data
  • Discuss the different kinds of problems that you
    might encounter in pursuing your research.

3
Contents
  • Developing Your Research Questions
  • The Different Types of Research
  • Selecting Appropriate Research Methods
  • Robustness of Methods
  • Structuring Your Methodology
  • Data Analysis
  • Problems with The Research Process
  • Summary

4
1. Developing your Research Questions
  • This section of the presentation examines what
    you need to do in order to focus your research.

5
The Purpose of Research
  • The purpose of research is to contribute to a
    current academic debate, and possibly to advance
    knowledge in some manner. This means that the
    research that you undertake has to be
  • Embedded in a recognisable field of study, taking
    account of, and drawing on, past research
  • Of interest to other researchers working in the
    same field, and possibly to the wider community
  • Generalisable to more than one individual
    experience or circumstance.

6
Typical Research Structure
The process of research is well-documented. This
diagram more or less describes the activities you
need to undertake. What we will do in this
session is to look at some of the elements, and
how they fit together.

Conduct Literature Review → Select Research
Questions → Devise Methodology & Research
Instruments → Apply Methods & Instruments →
Perform Statistical Analysis → Test Hypotheses &
Draw Conclusions
7
Getting Started
The first thing that you will do is to make sure
you are well-informed, and to pose pertinent
questions which your research will answer.
Conduct Literature Review → Select Research
Questions → Devise Methodology & Research
Instruments → Apply Methods & Instruments →
Perform Statistical Analysis → Test Hypotheses &
Draw Conclusions
8
Defining Your Field of Study (1)
  • You undertake a Literature Review in a particular
    field, in order to ensure that your research is
    embedded within that field, and that you are
    taking account of the methods, issues, results,
    theories and conventions which apply.
  • At the end of the literature review, you will
    have narrowed the field down to a relatively
    small topic within the field, and will have some
    unanswered questions which your research will
    try to investigate.
  • These focussed questions form the basis of your
    Research.

9
Defining Your Field of Study (2)
  • Your Research Questions are crucial, as they
    effectively define both the content of your
    research and the manner in which you carry it
    out.
  • Your research methodology is designed
    specifically to attempt to answer these questions
  • The data that you collect will be focussed on
    issues relevant to these questions
  • Your analysis of the data will seek to provide
    answers to the questions
  • Your conclusions will summarise the answers.

10
Research Questions
  • Typical research questions might be
  • To what extent is the business community in India
    aware of the potential of Bluetooth?
  • Is the software currently available for teaching
    arithmetic to 5 year olds appropriate and
    effective?
  • Are there differences in the way that men and
    women approach the task of writing software?
  • How can historical events be modelled effectively
    using VRML?

11
Defining Your Population
  • When framing your Research questions, you need to
    be clear about what set of objects, people or
    events forms the background population in your
    study.
  • Are you saying, for example, that the CAL
    software you produce is designed for all
    English-speaking people, all men, all
    Afro-Caribbeans, all children under 5, all those
    who have been diagnosed as dyslexic, or simply
    Afro-Caribbean boys under 5 with specific
    learning difficulties?
  • If you are investigating whether on-line learning
    is effective, is your population students,
    University students, UK University students,
    Liverpool Hope Students, Liverpool Hope Computing
    Students or Liverpool Hope MSc. Computing
    Students?

12
Research Questions: References (SW Library)
  • Lewis, Ian. So you want to do research! A guide
    for beginners on how to formulate research
    questions. 2nd ed. Edinburgh: Scottish Council
    for Research in Education, 1997. (SCRE
    Publication 2…). ISBN 1860030327

13
2. The Different Types of Research
  • Here we look at the different options available
    to us in carrying out the research.

14
Your Research Focus
  • The main focus of your research can be
  • Product-based Research, focusing on producing a
    piece of hardware or software (or the designs for
    them) which is at the cutting edge of a
    discipline, drawing on other researchers' ideas,
    best practice and what is feasible. In doing
    this, you may need to explore how the product
    will enmesh with current systems, and existing
    and future technologies.
  • People-based research, focusing on the people who
    interact with the hardware or software, looking
    at issues such as usability, user behaviour,
    compatibility of software with current user
    systems and other HCI issues.
  • Both of these approaches are legitimate, and it
    is possible that in carrying out your research
    you might need to use elements of each one.

15
Product v. People: Which Focus?
  • The focus of your research is decided by you.
  • It will depend upon how confident you are in
    creating a product at the cutting edge, or how
    comfortable you will be as a researcher in
    dealing with people.
  • When you frame your research questions, you need
    to ensure that their focus leads you into the
    kind of research that you want to do.
  • There is no right answer, but you may find that
    your research will be best carried out by using a
    main focus of one element, with a subsidiary
    focus of another. For example, producing a piece
    of software which securely encodes personal data
    as a self-encrypting and decrypting file stored
    on an ID card, may well need trialling with real
    people.

16
Approaches to Research
  • There are two main approaches to doing research:
  • Quantitative Research looks for "hard" numerical
    data; it proceeds by counting, measuring and
    summarising. The goal is to look for statistical
    significance in the results.
  • Qualitative Research takes a "soft" approach to
    data, and has a more descriptive feel. It
    attempts to get to the heart of the matter,
    exploring individual cases in detail, and seeking
    the reasons for behaviour and explanations for
    events.
  • Both of these approaches are legitimate, and it
    is possible to combine elements of each.

17
Quantitative v. Qualitative: Which Approach?
  • The approach you use will depend upon your topic
    and your research questions.
  • It will also depend upon how comfortable you as a
    researcher feel about using these methods.
  • There is no right answer here, and, as we shall
    see in the rest of this presentation, there may
    be good reasons for adopting a variety of
    methods, which encompass both quantitative and
    qualitative approaches

18
Types of Research
  • There are four main types of research that you
    might consider
  • Experimental Research
  • Survey Research
  • Evaluative Research
  • Observational Research
  • All four of these types can incorporate both
    quantitative and qualitative approaches

19
Experimental Research
  • This is normally quantitative, but can take two
    forms
  • An attempt to produce a piece of hardware,
    software or a combination of both, which is at
    the cutting edge of a discipline.
  • An attempt to investigate and document the
    performance of a particular piece of technology
    in specific circumstances.
  • This might involve
  • Creating hardware or software applications
  • Devising detailed tests and evaluation procedures
  • Carrying out rigorous testing
  • Evaluating performance or usability

20
Survey Research
  • This research can be qualitative or quantitative;
    in the widest sense, you are interviewing people.
    This might involve
  • An unstructured interview
  • A semi-structured interview
  • A structured interview based on a questionnaire
    (face to face, or by telephone)
  • An administered questionnaire
  • A postal questionnaire

21
Evaluative Research
  • This is primarily qualitative. Here you are
    trying to assess whether something is of value,
    whether it meets its specifications, or whether
    it is fit-for-purpose. This might involve
  • Developing a list of criteria on which to make
    judgements
  • Examining the object against each of the criteria
    to judge to what extent it conforms to
    expectations
  • Weighing the positives and the negatives, coming
    to overall conclusions
  • Matching these judgments against similar
    judgements made elsewhere in the literature or in
    real life.

22
Observational Research
  • This research normally uses a qualitative
    approach; in the widest sense, you are recording
    people's behaviour. This might involve
  • Participating in a task or situation
  • Making field notes of experiences
  • Creating and using an Observation Schedule
  • Making a check-list of occurrences of particular
    events or items.

23
Research possibilities
              Qualitative               Quantitative
Experiment    Field-testing             Bench-testing and simulations
Survey        Unstructured interviews   Written questionnaires
Evaluation    Using expert judges       Evaluation criteria and checklists
Observation   Participant observation   Observation schedules
24
Other Forms of Research
  • Historical & Documentary Research proceeds by
    scrutinising existing materials, both written
    documents and artefacts, using them as sources of
    evidence.
  • Action Research is normally conducted in an
    educational or political context. Action is
    taken, monitored, evaluated and then modified for
    the next cycle.
  • Ethnographic Research consists of an in-depth
    study of a cultural phenomenon, in order to
    generate new theory.
  • Case Study Research selects a whole range of
    research methods in scrutinising one particular
    context or situation.

25
Research Methods 1: References (SW Library)
  • Crabtree, Benjamin F. Doing qualitative
    research. London: Sage, 1992. (Research Methods
    for Primary Care 3). ISBN 0803943121
  • Creswell, John W. Research design: qualitative
    and quantitative approaches. Thousand Oaks,
    Calif.; London: Sage, 1994. ISBN 0803952554
  • Creswell, John W. Qualitative inquiry and
    research design: choosing among five traditions.
    Thousand Oaks, Calif.; London: SAGE, 1998.
    ISBN 0761901434

26
Research Methods 2: References (SW Library)
  • Miller, Delbert Charles. Handbook of research
    design and social measurement. 3rd ed. New York:
    David McKay Co. Inc., 1977. m0859739
  • Research methods in education and the social
    sciences. Block 3B: Research design. Milton
    Keynes: Open University Press, 1983. (DE304,
    Block 3B). ISBN 0335074235
  • Yin, Robert K. Case study research: design and
    methods. Rev. ed. Newbury Park; London: Sage,
    1989. (Applied Social Research Methods Series
    v.5). ISBN 080393470x

27
3. Selecting Appropriate Research Methods
  • The next few slides discuss how you might go
    about selecting your research methods from those
    available

28
Selecting Your Methodology
  • Your research methodology consists of
  • Research Methods (experiment, survey etc.)
  • Research Instruments (questionnaire, tests etc.)
  • Analytical Tools (statistics, inductive or
    deductive methods)
  • When selecting the methodology, you need to be
    aware of
  • The Research Questions you are trying to answer
  • The Population you are trying to generalise to.

29
Factors to Consider
  • Undertake Literature Review: How has previous
    research in this area been done?
  • Select Research Questions: What methods have
    been used?
  • Devise Methodology & Research Instruments: What
    research instruments have been devised?
  • Apply Methods & Instruments: How will the
    methods & instruments be applied?
  • Perform Statistical Analysis: What statistical
    tests can be carried out?
  • Test Hypotheses & Draw Conclusions: What
    research hypotheses can be tested?
30
Appropriate methodology
  • Do your Research Questions involve impressions,
    attitudes, opinions, beliefs or knowledge held by
    people?
  • If so, then survey research is appropriate
  • Do your Research Questions involve behaviour,
    actions, reactions to events, circumstances or
    objects?
  • If so, then observational research is appropriate

31
Appropriate methodology
  • Do your Research Questions involve the
    reliability or robustness of hardware, software,
    systems or infrastructure?
  • If so, then an evaluative study is appropriate
  • Do your Research Questions involve the testing of
    hardware or software at the technical level,
    speed, accuracy, security etc?
  • If so, then experimentation is appropriate

32
A Mix of Methods
  • It may be that your research questions overlap
    some of these categories, or that different
    questions address more than one category.
  • If so, you should consider a mix of methods that
    ensures you cover all eventualities. This may
    bring added benefits (see Triangulation).

33
Mixed Methods: References (SW Library)
  • Mixing methods: qualitative and quantitative
    research / edited by Julia Bra… Aldershot:
    Avebury, 1995. ISBN 1859721168
  • Tashakkori, Abbas. Mixed methodology: combining
    qualitative and quantitative approaches.
    Thousand Oaks, Calif.; London: Sage, 1998.
    (Applied Social Research Methods Series v.46).
    ISBN 0761900705

34
4. Robustness of Methods
  • As well as ensuring that your questions are
    well-focussed, and your methods relevant and
    appropriate, you need to ensure that your methods
    are also Reliable and Valid

35
Reliability & Validity
  • Research is Reliable if different methods,
    researchers and sample groups would have produced
    the same results.
  • Research is Valid if the results produced by the
    research are accurate portrayals of the
    situation, explanations are effective, and
    predictions from the research are actually borne
    out by observation.

36
Reliability
  • Research can have poor reliability if it is based
    on one or two cases only, or if personal
    judgement or opinion is included.
  • Reliability can be improved if data collection
    methods are made more precise, experimentation is
    controlled, and statistical summaries can be
    produced.

37
Validity
  • Research can have poor validity if the data
    produced is too far removed from the object under
    study, or from the respondent. Using detailed, highly
    structured research instruments can lead to
    distortions, by forcing observations into
    categories when they do not fit. Data summaries
    and averaging can also lead to distortions, and
    meaningless generalities. Graphical
    representations and percentages can be highly
    selective and produce biased findings.
  • Validity can be improved by working directly with
    individuals or objects, focusing on specific
    cases, making detailed observations, conducting
    face to face interviews, taking detailed
    measurements in specific circumstances etc.

38
Reliability v. Validity
Valid, but not reliable
39
Reliability v. Validity
Reliable, but not Valid
40
Methodological Trade-Off
  • If you improve validity, you will almost
    certainly reduce reliability.
  • If you improve reliability it will be at the cost
    of reducing your validity.
  • The trade-off is to balance the two so that the
    benefits of using particular methods outweigh the
    losses incurred.

41
Triangulation
  • Triangulation takes its name from the
    navigational method of positioning a ship at sea
    by making two independent observations.
  • The purpose here is to use two distinct
    methodologies, independent of one another to
    confirm that the effects which we are observing
    are real, and not artefacts of the research
    process.

42
Triangulation
  • Triangulation attempts to counter the
    methodological trade-off by using a mix of
    methodologies.
  • If you are using highly structured, statistical
    or measurement-based research, you supplement
    this with detailed observations or face to face
    interviewing.
  • If your research is mainly based on individuals
    or on single items, you ensure that at least part
    of it has some statistical summaries, structured
    observations or questionnaires.

43
Validity & Reliability: References (SW Library)
  • Kirk, Jerome. Reliability and validity in
    qualitative research. Beverly Hills, Calif.:
    Sage, 1986. (Qualitative Research Methods
    Series 1). ISBN 0803924704
  • Litwin, Mark S. How to measure survey
    reliability and validity. London: Sage, 1995.
    (The Survey Kit 7). ISBN 0803957041

44
5. Structuring Your Methodology
  • This section looks in detail at the techniques
    that you might employ in the Research Activity
    itself.

45
Research Structure
After framing your Research Questions, and
selecting your methodology, you should test that
this is going to work by conducting a small-scale
Pilot Study.

Conduct Literature Review → Select Research
Questions → Devise Methodology & Research
Instruments → Apply Methods & Instruments →
Perform Statistical Analysis → Test Hypotheses &
Draw Conclusions
46
5a Pilot Studies
  • A pilot study is a set of preliminary
    investigations and procedures carried out prior
    to the main research, to ensure that the research
    is possible and can proceed without hitches.

47
Pilot Study (1)
  • In almost every type of Research, you will need
    to devise or adapt some sort of Research Method
    or Instrument.
  • This might be a measuring procedure, a set of
    evaluation criteria, an interview procedure or
    questionnaire, or an observational method or
    schedule.

48
Pilot Study (2)
  • Your procedure or instrument should be based on
    best practice from previous research.
  • It is unlikely that you will find exactly what
    you need; you will be forced to adapt or amend
    it.
  • This means that you will need to conduct a Pilot
    Study to test whether the new instrument is
    fit-for-purpose.

49
Pilot Study (3)
  • The main purpose of a Pilot Study is to iron out
    bugs in procedures, or to check that
    instruments work.
  • The size of the pilot study will depend on how
    inventive you needed to be.
  • If your procedures are almost entirely of your
    own devising, then you will need a fairly
    extensive pilot study to check them.
  • If you have lifted methods from the literature,
    then your pilot can be quite small.

50
Pilot Study (4)
  • With questionnaires & observation schedules, you
    will need to check that individual items are
    giving you the expected results.
  • With test procedures, you need to check that you
    can actually do what you have said that you are
    going to do.
  • With evaluation criteria, you need to use these
    in a limited context, to see that they are
    workable and effective.

51
Pilot Study (5)
  • As a result of the Pilot Study, you need to
    evaluate procedures and instruments, making
    amendments where necessary.
  • The Pilot Study stage will be part of your
    research; you will need to write this up,
    reporting on how your methods were adapted and
    improved as a result.

52
Carrying Out the Research
Here we will examine how particular methods and
instruments can be applied in the research
situation.

Undertake Literature Review → Select Research
Questions → Devise Methodology & Research
Instruments → Apply Methods & Instruments →
Perform Statistical Analysis → Test Hypotheses &
Draw Conclusions
53
5b A detailed look at some Research methods
  • Experimental Design
  • Evaluative Research
  • Observational Research
  • Survey Research

54
5c.1 Experimental Design
  • This section looks at the different ways in
    which you can conduct experiments. Note that you
    do not need to be doing pure experimental
    research to adopt these methods.

55
Experimental Designs
  • You may need to think about experimental design
    even if you are doing types of research other
    than Experiments
  • These designs occur where you have made some
    change, and are trying to find out its effect.
  • The net result is that you are comparing one
    thing, or one group with another thing or group.

56
Experimental Designs
  • The terminology for experimental designs comes
    from agricultural experiments.
  • We have different treatments which we apply to
    different groups
  • We control the groups for different factors

57
Experimental Designs
  • Experiments normally involve independent and
    dependent variables
  • Independent variables are factors that can be
    controlled for, like temperature, file sizes,
    age, gender etc.
  • Dependent variables are those factors which will
    change as a result of altering the independent
    variables.
  • For example, download times will increase as a
    result of increasing file sizes.
  • Download time: dependent variable
  • File size: independent variable
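The file-size example above can be sketched as a toy calculation. The function name and the bandwidth and latency figures below are hypothetical, not taken from the slides; they simply show the independent variable being varied while everything else is held fixed:

```python
# Toy illustration (hypothetical numbers): file size is the independent
# variable we control; download time is the dependent variable we measure.

def download_time(file_size_mb, bandwidth_mbps=8.0, latency_s=0.2):
    """Predicted download time in seconds for a given file size.

    bandwidth_mbps and latency_s are assumed constants, held fixed
    (controlled) across the experiment.
    """
    return latency_s + (file_size_mb * 8) / bandwidth_mbps

# Varying the independent variable changes the dependent variable:
sizes = [1, 2, 4, 8]                       # MB
times = [download_time(s) for s in sizes]  # seconds, monotonically increasing
```

Holding the other factors constant is what lets us attribute the change in the dependent variable to the factor we varied.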

58
Pre-Test/Post-Test Control Group Design
  • This is the classic experimental design.
  • It allows you to split your sample into two
    distinct parts (A & B)
  • You give the same test to the two groups before
    you start
  • You treat one of the groups
  • You apply the same test afterwards.

Group A (treated):   Pre-test → Treatment → Post-test
Group B (untreated): Pre-test → Post-test
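One common way to analyse this design (a sketch, not a method prescribed by the slides) is a difference-in-differences estimate; all the scores below are invented for illustration:

```python
# Sketch of analysing a pre-test/post-test control group design with a
# difference-in-differences estimate. All scores are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group minus change in the control group.

    The control group's change estimates what would have happened
    without treatment; subtracting it isolates the treatment effect.
    """
    return (mean(treated_post) - mean(treated_pre)) - \
           (mean(control_post) - mean(control_pre))

# Group A (treated) improves by 12 points on average, Group B
# (untreated) by 2, so the estimated treatment effect is 10 points.
effect = diff_in_diff(
    treated_pre=[50, 55, 60], treated_post=[62, 67, 72],
    control_pre=[52, 54, 56], control_post=[54, 56, 58],
)
```

Whether such an effect is statistically significant would still need a formal test, which depends on sample sizes and variability.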
59
Control Group
  • The Control Group (B) is one which is the same
    in all respects, except for the fact that we make
    no changes.
  • We can use the test measurements on the control
    group as a benchmark for any changes we make to
    the treatment group (A)

60
Example 1: Server Testing
  • Suppose we wish to check whether a firewall is
    effective in blocking external attacks.
  • Set up System A & System B on two different
    servers, running near-equivalent internal &
    external programs.
  • Devise a test which includes a full range of
    possible attacks & apply to both A & B. Take a
    series of measurements & observations.
  • Incorporate the firewall into System A.
  • Reapply the test to both systems, which are again
    running near-equivalent programs.

61
Example 2: Software Design
  • Suppose we wish to find out whether
    anthropomorphic agents are useful in
    communicating with novice users of a website.
  • Set up Website A & Website B with
    near-equivalent structure, but different
    content.
  • Devise a procedure which asks two equivalent sets
    of users to make explorations of the websites &
    apply to both A & B. Make a series of
    observations.
  • Incorporate Agents into Website A.
  • Re-apply the procedure, asking two more
    equivalent sets of users to make the same
    explorations of both Websites. Again make
    observations.

62
Example 3: On-Line Learning
  • Suppose we wish to find out whether training is
    effective in helping students cope with the
    demands of on-line learning.
  • Set up group A & group B with near-equivalent
    members.
  • Have the two groups undertake a short programme
    of learning on-line, which incorporates a short
    evaluation and a knowledge test.
  • Give training to Group A.
  • Have the two groups undertake a further short
    programme of learning on-line, again
    incorporating a short evaluation and knowledge
    test.

63
Factorial Designs
  • This is where we examine the effects of two or
    more independent factors simultaneously.
  • For two factors, we would need 4 groups:
  • Group A (Control: no treatment)
  • Group B (Factor 1 treatment only)
  • Group C (Factor 2 treatment only)
  • Group D (Factors 1 & 2 treatments)
  • Clearly, this is going to increase the complexity
    and size of the research, but it has the added
    benefit of producing verifiable results in cases
    where two factors interact to produce interesting
    effects.
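The four-group layout above can be sketched numerically. The cell means below are hypothetical, and the two helper functions show one standard way (not prescribed by the slides) of reading main and interaction effects off a 2×2 design:

```python
# Sketch of the four groups in a two-factor factorial design, with
# hypothetical mean outcome scores for each cell.

group_means = {
    ("control", "control"): 50.0,   # Group A: no treatment
    ("factor1", "control"): 58.0,   # Group B: factor 1 only
    ("control", "factor2"): 55.0,   # Group C: factor 2 only
    ("factor1", "factor2"): 70.0,   # Group D: both factors
}

def main_effect_1(m):
    """Average gain from factor 1, across both levels of factor 2."""
    return ((m[("factor1", "control")] - m[("control", "control")]) +
            (m[("factor1", "factor2")] - m[("control", "factor2")])) / 2

def interaction(m):
    """Extra gain when both factors are applied together, beyond the
    sum of their separate effects relative to the control group."""
    separate = ((m[("factor1", "control")] - m[("control", "control")]) +
                (m[("control", "factor2")] - m[("control", "control")]))
    combined = m[("factor1", "factor2")] - m[("control", "control")]
    return combined - separate
```

With these invented numbers the interaction is positive: the two factors together produce more than their separate effects would suggest, which is exactly the kind of result a factorial design can verify and a one-factor design cannot.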

64
Other Design Variants
  • Post-Test only Control Group Design
  • Here we assume that both groups are the same (but
    do not test that assumption). We simply apply the
    treatment to one group, and apply the test to
    both groups.
  • Matched Pairs Design
  • The individuals we use in the test groups
    (subjects) are matched for characteristics which
    are likely to affect the outcome (gender, age,
    level of education, ethnicity etc.)

65
Experimental Designs: Reliability Issues
  • For high reliability, we need to build in strict
    controls over each of the independent variables
    affecting the outcomes of the experiment.
  • We will also need to ensure that any measurements
    taken of the dependent variables are as accurate
    as possible.
  • We also need to ensure that our methodology is
    clear and replicable.

66
Experimental Designs: Validity Issues
  • For high validity, we need to conduct the
    experiment in as natural a setting as possible,
    and in as near as exact a match to the
    circumstances in which the events or objects
    would normally operate.
  • If a sample is being used, then the sample
    (whether time, events, people or objects) should
    be as representative of the background
    population as possible, and we should make
    detailed observations of how the events unfold as
    well as the final measurements.

67
Experimental Design: References (SW Library)
  • Campbell, Donald T. Experimental and
    quasi-experimental designs for research.
    Boston; London: Houghton Mifflin, 1963.
    ISBN 0395307872
  • Field, Andy & Hole, Graham. How to design and
    report experiments. London: SAGE, 2003.
    ISBN 0761973826
  • Miller, Steve. Experimental design and
    statistics. London: Methuen, 1975.
    (Essential Psychology A8). m0805407

68
5c.2 Evaluative Research
  • This section looks at the methods and issues
    surrounding evaluation. You may need to use such
    techniques if evaluation is implicit in your
    research questions

69
Evaluative Research
  • Evaluative Research covers those cases where you
    are attempting to compare whether one procedure
    or object is better or more effective than
    another procedure or object, or to determine
    whether a particular procedure or object is
    fit-for-purpose.
  • Evaluation needs to be done against a set of
    criteria, which have been established as valid
    and reliable in this context. You would normally
    produce an evaluation form to be completed by
    respondents.

70
Evaluation Criteria
  • To establish criteria for evaluation, we need to
    break the topic down into individual elements,
    and state the different sub-topics on which the
    item is to be evaluated.
  • Alongside this, we will normally state specific
    questions which will need to be answered in order
    to judge the item against the criterion.
  • The answers to the questions may involve
    measurements, counts, assessments on subjective
    scales, statements of fact or possibly even
    opinion.

71
Example: Website Evaluation Criteria
  • Design
  • Is the use of colour acceptable?
  • Are the elements in harmony?
  • Navigation
  • Do the links work?
  • How many links per page?
  • Content
  • How many words & graphics on the page?
  • Is the text interesting & informative?
  • Interactivity
  • What interactive features are used?
  • Do these improve communication with the user?
  • Coding
  • What scripting is used? Is it clearly annotated?

72
Example: Website Evaluation Form
  • 1. Design
  • Colour-safe palette used? Yes / No
  • Harmony of elements on page good/ neutral/ poor
  • 2. Navigation
  • Number of links on the page ____
  • Number of working links ____
  • 3. Content
  • Total size of graphics files on page ____ MB
  • Is the text boring/ dull/ neutral/
    interesting/ fascinating
  • 4. Interactivity
  • Tick and/or name all interactive features used:
  • rollovers / dynamic images / image maps / ______,
    ______
  • 5. Coding
  • What DTD has been used? _______________________
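A completed form like the one above can be reduced to a summary score. This is a sketch only: the 1-3 rating scale, the criterion weights and the ratings below are all invented for illustration, not part of the slides:

```python
# Sketch of turning a completed evaluation form into a summary score.
# The criteria, weights and ratings below are hypothetical examples.

RATING_SCALE = {"poor": 1, "neutral": 2, "good": 3}

def weighted_score(ratings, weights):
    """Weighted average of the ratings (each on the 1-3 scale above),
    expressed as a fraction of the maximum possible score."""
    total = sum(RATING_SCALE[ratings[c]] * w for c, w in weights.items())
    maximum = sum(3 * w for w in weights.values())
    return total / maximum

weights = {"design": 2, "navigation": 3, "content": 3, "interactivity": 1}
ratings = {"design": "good", "navigation": "neutral",
           "content": "good", "interactivity": "poor"}
score = weighted_score(ratings, weights)  # fraction between 0 and 1
```

A single number like this is convenient for comparing items, but as the validity discussion that follows warns, summarising can also hide detail, so the individual criterion judgements should be reported too.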

73
Evaluation Criteria: Validity Issues
  • For criteria to be valid, each of the criteria
    elements needs to have face or content validity:
    the questions posed by the criterion need to be
    relevant to the object, and to relate to a
    feature of it.
  • We also need to be able to answer the questions
    in an objective manner, without recourse to
    guessing, or giving an impressionistic response.

74
Evaluation Criteria: Reliability Issues
  • For criteria to be reliable, each of the criteria
    elements needs to be clear and unambiguous, so
    that different assessors would interpret the
    criteria in the same way.
  • We also need to ensure that repetition of the
    evaluation exercise will yield results which are
    not dissimilar to one another.

75
Evaluation Methods: References (SW Library)
  • Britain, Sandy. A framework for pedagogical
    evaluation of virtual learning environments.
    Manchester: Joint Information Systems Committee,
    1999. (JISC Technology Applications Programme
    r…). M0000712EL
  • Broadbent, George Ernest. The role of
    evaluation in user interface design: a practical
    implementation. Liverpool: University of
    Liverpool, 1997. p7270369
  • Redmond-Pyle, David. Graphical user interface
    design and evaluation (GUIDE): a practical
    process. London: Prentice Hall, 1995.
    ISBN 013315193x
  • Smeltzer, Nicholas. Critical analysis of the
    design and evaluation of a computer-based
    project. Liverpool: University of Liverpool,
    2001. M0002704LO

76
5c.3 Observational Research
  • This section describes what you need to do if
    your research involves making detailed
    observations of people, events or objects.

77
Observational Research
  • In this case you are trying to determine
  • Either: how an individual or group of individuals
    reacts, interacts or behaves in particular
    circumstances
  • Or: how software or hardware performs when used
    by particular groups of people

78
Participant Observation
  • In this case, the researcher becomes one of the
    subjects, and works alongside the subjects,
    monitoring behaviour and interacting with them.
  • Pros: Researcher can fully understand what the
    issues are, and can get real validity in the
    research.
  • Cons: Researcher can influence the research and
    alter opinions; highly subjective & unreliable;
    taking notes is difficult and relies on good
    memory.

79
Non-Participant Observation
  • In this case, the researcher studies the
    situation apart from the subjects, taking notes,
    monitoring behaviour and observing interactions.
    The use of audio & video recording is useful
    here.
  • Pros: Researcher can get an overview of the
    situation, and achieve objectivity.
  • Cons: Researcher can only see resulting
    behaviours, not what is causing them or why they
    are happening.

80
Observation Schedules
  • An observation schedule can be a simple list of
    things to look for in a particular situation.
  • It can be far more complex: a minute-by-minute
    count of events such as mouse-clicks or verbal
    interactions between subjects.

81
Observation ScheduleAn Example
  • Observation of subject using Information Portal
  • Subject: M / F   Age: 18-21 / 21-30 / 31-50 / 50+
  • Date _________ Time _________
  • Selected Navigation Tool: Mouse / Keyboard /
    TouchScreen
  • First 5 pages visited, in order: __ __ __
    __ __
  • Time to obtain required information:
    ___ min ___ sec
  • Total number of pages visited: __
  • Feedback from subject:
  • very positive / positive / neutral / negative /
    very negative
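A schedule like this can be kept consistent across observation sessions by recording each completed sheet as a structured record. A minimal sketch in Python; the field names and example values are illustrative, not part of the slides:

```python
from dataclasses import dataclass

@dataclass
class PortalObservation:
    """One completed row of the example observation schedule."""
    sex: str              # "M" or "F"
    age_band: str         # "18-21", "21-30", "31-50" or "50+"
    navigation_tool: str  # "Mouse", "Keyboard" or "TouchScreen"
    first_pages: list     # first 5 pages visited, in order
    seconds_to_info: int  # time to obtain the required information
    pages_visited: int    # total number of pages visited
    feedback: str         # "very positive" ... "very negative"

obs = PortalObservation(
    sex="F", age_band="21-30", navigation_tool="Mouse",
    first_pages=["home", "search", "results", "detail", "help"],
    seconds_to_info=95, pages_visited=12, feedback="positive",
)
print(obs.pages_visited)  # 12
```

Recording observations this way leaves little room for subjective judgement, which is exactly the reliability property the next slide asks for.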

82
Observation SchedulesSome Reliability Issues
  • For an observation schedule to be reliable, it
    should require structured documentation of
    events.
  • This will involve such things as checklists,
    minute-by-minute categorisation of activity, and
    numerical data such as frequencies of occurrence
    and time intervals. Timings should have clear
    start and end points.
  • The schedule should leave little room for
    subjective judgement.

83
Observation SchedulesSome Validity Issues
  • For an observation schedule to be valid, it
    should refer to events which actually happen:
    each of the events on the sheet should be
    possible, and likely to occur.
  • Timings should be possible to take without
    interfering with the other observations which
    must be made.
  • There should be room for observations which
    enrich the data by adding detail to the
    numbers, offering explanation and illumination.

84
Observational ResearchReferences (SW Library)
  • Harding, Jacqueline. - How to make observations
    and assessments / Jackie Harding and Liz
    Meldon-Smith. - 2nd ed. - London: Hodder &
    Stoughton, 2000. - 034078038x
  • Robertson, Kevin. - Observation, analysis and
    video / editors: Anne Simpkin and Penny
    Crisfield. - Leeds: National Coaching
    Foundation, 1999. - 1902523164
  • Simpson, Mary. - Using observations in
    small-scale research: a beginner's guide / Mary
    Simpson. - Glasgow: Scottish Council for
    Research in Education, 1995. - (SCRE publications
    16 130). - 1860030122

85
5c.4 Survey-Type Research
  • This section describes what you need to do in
    order to use human respondents to provide you
    with information.

86
Survey methods
  • There are two distinct elements here:
  • The interview techniques that you adopt in order
    to elicit information from people
  • The sampling methods that you adopt in order to
    select respondents for interview
  • You will need to make rational choices for both
    of these elements, depending upon the focus of
    your research and the population under study.

87
Interviewing
  • We interview people face-to-face in order to find
    out exactly what they think.
  • With the right questions, people respond with
    high-quality information.
  • The data that you get can be of high validity,
    since you have access to respondents' own words.

88
Types of Interviews
  • Unstructured Interview
  • Structured Interview
  • Semi-Structured Interview
  • Administered Questionnaire

89
Open or Closed?
  • When interviewing respondents, the main choice
    facing the researcher is whether to use open
    questions, which leave the respondent free to
    answer in any way they think fit, or closed
    questions which force the respondent to make
    particular choices, pre-determined by the
    researcher.
  • OPEN: What is your experience of chat rooms?
  • CLOSED: Do you think chat rooms should be
    monitored? (Yes/No/Maybe)

90
Open or Closed?Validity Reliability Issues
  • In general, the data from open questions is
    richer, more illuminating, and more valid, as it
    can illustrate clearly why particular subjects
    think the way that they do, or why they behave in
    particular ways.
  • Data from closed questions, however, is more
    amenable to statistics: it is easier to
    summarise, spot trends and make comparisons. In
    general the data is more reliable.
  • A good strategy is to use a mixture of both open
    and closed questions.

91
Unstructured Interviews
  • Interviewer has no set agenda; the object is to
    get the respondent to talk freely about various
    topics.
  • Pros: Can yield high-quality data; you often get
    real insights into what people really think.
  • Cons: Can be difficult for the novice researcher
    to carry out; you need to be good at steering the
    conversation without forcing it. Time consuming,
    so you can only carry out a small number.
    Difficult to analyse.

92
Semi-Structured Interviews
  • Interviewer has a formal list of topics, which
    sets the agenda; however, the interviewer is free
    to take these in different orders, or to return
    to topics at different points.
  • Here we include the idea of focus groups, where
    a facilitator encourages the discussion of a
    particular topic.
  • Pros: Data provided can be almost as good as an
    unstructured interview, but in a more focussed
    manner.
  • Cons: Can miss important ideas, because the
    agenda is set beforehand; interviews can also be
    time consuming, and difficult to manage and
    analyse afterwards.

93
Structured Interviews
  • Interviewer has a list of topics to be taken in a
    particular order; questions are written down, and
    read out.
  • Pros: You can make good comparisons between
    different respondents; you also have access
    to respondents' thinking.
  • Cons: Unless you have done some preliminary
    investigations, and extensively piloted the
    interviews, you may miss lots of important data.
    Takes time to do properly, so you can only carry
    out a few.

94
Administered Questionnaire
  • Interviewer has devised a list of questions which
    are read out: some multiple choice, some
    open-ended. Filled in either by the respondent or
    the interviewer.
  • Pros: Interview can be quite brief; it is easy to
    make comparisons, and the data is amenable to
    statistical analysis.
  • Cons: Data may be warped by the choice of
    questions. Respondents may be forced into giving
    fabricated answers.

95
Questionnaires
  • There are many different types of questionnaire,
    depending upon what you are trying to find out.
  • As well as gathering factual data about the
    person, you may be trying to explore their
  • Knowledge
  • Beliefs
  • Attitudes
  • Opinion
  • Behaviour

If you need to design a questionnaire, see for
example http://www.leeds.ac.uk/iss/documentation/top/top2/index.html
96
Question Types
  • Factual Data
  • Binary (Yes/No)
  • Single Selection from Categories
  • Multiple Selection from Categories
  • Attitude Scale items
  • Focussed items to elicit categories
  • Conditional Questions for routing
  • Open-ended Questions
  • Self-Reporting

97
Writing Questions
See http://www.analytictech.com/mb313/principl.htm
  • Put Factual Data (demographics) at end
  • Put Explanations and Disclaimers at the start
  • Keep questions as brief as possible
  • Use non-technical language where possible
  • Avoid leading respondents towards particular
    answers
  • Use direct questions, no hypotheticals
  • Use simple questions, no portmanteaus
  • Use a mixture of positively and negatively worded
    questions
  • Verify data by asking same question in different
    ways.

98
Questionnaire Example
  • 1. Which operating system are you currently
    using?
  • Linux / Windows / Other
  • 2. How would you rate the operating system?
  • very poor / poor / good / very good
  • 3. Circle the tasks which you use your computer
    to do
  • Word processing / Internet Access / Program
    Development
  • 4. Estimate the number of hours you use the
    computer for each week
  • ______ hours
  • 5. Do you use broadband?
  • Yes / No
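Closed questions such as these tally directly into frequency counts, ready for the percentage summaries shown later in this session. A sketch using Python's `collections.Counter`; the batch of responses to question 2 is invented for illustration:

```python
from collections import Counter

# Hypothetical answers to question 2 from a batch of returned questionnaires
ratings = ["good", "very good", "poor", "good", "good", "very poor", "very good"]

counts = Counter(ratings)
total = len(ratings)
for category in ["very poor", "poor", "good", "very good"]:
    pct = 100 * counts[category] / total
    print(f"{category}: {counts[category]} ({pct:.0f}%)")  # e.g. good: 3 (43%)
```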

99
Protocol
  • Whatever your interviewing method, as part of
    the ethical constraints on Hope Researchers, you
    are required to do the following:
  • Explain to the respondents what the research is
    about.
  • Tell them that they will not be identified by
    name in the final report, and that any views that
    they express will be in confidence.
  • Explain to them that if there are questions with
    which they feel uncomfortable, they do not have
    to answer.

100
Attitude Measurement
  • You may wish to incorporate attitude
    measurement into your questionnaire as either a
    major or a minor feature.
  • There are several different types of scales which
    can be used to elicit numerical measurements of
    attitude
  • Likert Scaling
  • Thurstone Scaling
  • Guttman Scaling
  • Semantic Differential Scaling
  • See http://www.socialresearchmethods.net/kb/scalgen.htm

101
Likert Scales (1932)
  • This consists of items like I would not trust a
    banks website to keep my details secure
  • Respondents are asked to use a scale such as
    1Strongly disagree 2disagree 3neutral
    4agree 5strongly agree
  • Scales can vary 0-4, 1-7 etc.
  • Some proponents suggest removing the neutral
    category to force a choice, but this can reduce
    validity.
  • Scores on individual items are totalled to obtain
    the respondents score this is often shown as an
    average.
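The totalling rule can be sketched as follows. Reverse-coding of negatively worded items (so a high total always means a more positive attitude) is a common convention, shown here as an illustration rather than something prescribed by the slides:

```python
def likert_score(responses, reverse_items=(), scale_max=5):
    """Total a respondent's Likert items (each scored 1..scale_max).

    Items listed in reverse_items are negatively worded, so their
    scores are flipped: a 1 ("strongly disagree") counts as scale_max.
    """
    total = 0
    for i, r in enumerate(responses):
        total += (scale_max + 1 - r) if i in reverse_items else r
    return total

# Item 0 is negatively worded ("I would not trust a bank's website ..."),
# so strongly disagreeing (1) with it contributes 5 to the total.
responses = [1, 4, 5, 3]
score = likert_score(responses, reverse_items={0})
print(score, score / len(responses))  # 17 4.25
```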

102
Thurstone Scales (1928)
  • This pre-ranks 11 statements with numerical
    values 1-11 each statement carries a numerical
    score, for example
  • 1 Teleworking is an impossible concept
  • 2 Teleworking may be OK in exceptional cases
  • 8 If offered the opportunity, I would try
    teleworking
  • 11 In 20 years time, everyone will be
    teleworking
  • Clearly, lots of preparatory work needs to be
    done to generate and rank the statements.
  • A respondents score, is the average numerical
    value of all the items they agree with.
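The scoring rule stated above, sketched directly, using the example statement values from this slide:

```python
def thurstone_score(agreed_values):
    """A respondent's score is the mean scale value of the
    statements they agreed with (pre-ranked 1..11 by judges)."""
    return sum(agreed_values) / len(agreed_values)

# Respondent agreed with the statements valued 2, 8 and 11
print(thurstone_score([2, 8, 11]))  # 7.0
```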

103
Guttman Scales (1944)
  • This pre-ranks statements in order, very similar
    to Thurstone scaling, except here we attempt to
    construct the scale so that if a person agrees
    with item 4 on the scale, they will also agree
    with items 1,2 and 3.
  • When the questions are administered, the items
    are muddled, but each retains a ranking.
  • The respondents score is the sum of the ranks
    associated with the items he or she agreed with.
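Scoring, plus a check of the cumulative ("agreeing with 4 implies 1, 2 and 3") property, can be sketched as below; the check is an illustrative simplification of scalogram analysis, not a full implementation:

```python
def guttman_score(agreed_ranks):
    """Sum of the ranks of the items the respondent agreed with."""
    return sum(agreed_ranks)

def is_cumulative(agreed_ranks):
    """On a perfect Guttman scale, agreeing with rank n implies
    agreeing with every rank below it."""
    return sorted(agreed_ranks) == list(range(1, len(agreed_ranks) + 1))

print(guttman_score([1, 2, 3, 4]))  # 10
print(is_cumulative([1, 2, 3, 4]))  # True
print(is_cumulative([1, 3, 4]))     # False (rank 2 is missing)
```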

104
Osgood's (1957) Semantic Differential Scales
  • Respondents are asked to rate an idea or an
    object against a series of opposing adjectives or
    descriptions, for example:
  • Using Learnwise
  • Exciting ______ Dull
  • Hard ______ Easy
  • Frustrating ______ Stimulating
  • The scale asks respondents to tick the box
    nearest the descriptor that they agree with. Each
    box has a numerical value, e.g. 1, 2, 3 ... 7.
  • The respondent's score can be portrayed
    graphically, or as a total of numerical values.
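A group profile across the adjective pairs can be computed by averaging each pair over respondents; the pairs come from the slide, while the ratings are invented for illustration:

```python
# Each respondent rates "Using Learnwise" on each pair, 1..7,
# where 7 is nearest the right-hand descriptor.
pairs = [("Exciting", "Dull"), ("Hard", "Easy"), ("Frustrating", "Stimulating")]
ratings = [   # one row per respondent, one column per pair
    [2, 6, 5],
    [3, 5, 6],
    [2, 7, 4],
]

for j, (left, right) in enumerate(pairs):
    mean = sum(row[j] for row in ratings) / len(ratings)
    print(f"{left} ... {right}: {mean:.2f}")
# Exciting ... Dull: 2.33
# Hard ... Easy: 6.00
# Frustrating ... Stimulating: 5.00
```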

105
Questionnaires & Interviews: Reliability Issues
  • For questionnaire results to be reliable, you
    need closed questions that are precisely framed,
    unambiguous, with response categories that are
    well-defined, exhaustive and exclusive. You need
    to ask the same question in different ways, and
    you need to collate the information into
    statistical summaries and expose the data to
    rigorous statistical testing.
  • You need to use as large a sample as possible,
    and even out any random fluctuations in the data
    by using summary statistics and hypothesis
    testing.

106
Questionnaires & Interviews: Validity Issues
  • For questionnaire results to be valid, you need
    response categories which enable the respondent
    to express their views accurately this can best
    be done with open questions. You should take time
    with the respondent and respect the data that
    they provide for you.
  • You need to ensure that the sample is
    representative of the population, and large
    enough for the results to be statistically
    significant.

107
Questionnaire References On-Line
  • http://www.tardis.ed.ac.uk/~kate/qmcweb/qcont.htm
  • http://www.leeds.ac.uk/iss/documentation/top/top2.pdf
  • http://www.statpac.com/surveys/
  • http://www.fao.org/docrep/W3241E/w3241e05.htm#chapter%204%20questionnaire%20design
  • http://www.surveysystem.com/sdesign.htm

108
Questionnaire References (1)(SW Library)
  • Foddy, William. - Constructing questions for
    interviews and questionnaires: theory and
    practice. - Cambridge: Cambridge University
    Press, 1993. - 0521467330
  • Fowler, Floyd J. - Improving survey questions:
    design and evaluation. - London: Sage, 1995. -
    0803945833
  • Frazer, Lorelle. - Questionnaire design and
    administration: a practical guide / Lorelle
    Frazer. - Brisbane: Wiley, 2000. - 0471342920
  • Gillham, W. E. C. (William Edwin Charles), 1936-.
    - Developing a questionnaire. - London:
    Continuum, 2000. - (Real world research). -
    0826447953

109
Questionnaire References (2) (SW Library)
  • Oppenheim, A. N. (Abraham Naftali), 1924-. -
    Questionnaire design, interviewing and attitude
    measurement / A.N. Oppenheim. - New ed. - London:
    Continuum, 2000. - 0826451764
  • Wengraf, Tom. - Qualitative research interviewing:
    biographic narrative and semi-structured
    interviews. - London: SAGE, 2001. - 0803975007
  • Young, Pauline V. - Scientific social surveys and
    research: an introduction to the background. -
    Englewood Cliffs, N.J.: Prentice-Hall, 1966. -
    (Prentice-Hall Sociology Series). - m0859969
  • Youngman, Michael Brendan. - Designing and
    analysing questionnaires. - Maidenhead, Berks:
    TRC Rediguide. - (Rediguide 12). - m0891542

110
Sampling
  • The question of how to select the respondents for
    the sample is always tricky.
  • The principle behind sampling is that the sample
    should be appropriate in order for you to
    generalise your results to the population under
    study.
  • There are essentially three ways in which this is
    done:
  • Random Sampling
  • Quota Sampling
  • Stratified Random Sampling

111
Random Sampling
  • This method requires you to get a list of the
    whole population, as near complete as you can
    find, then use some random selection method (such
    as shutting your eyes and stabbing a pen at the
    list, or allocating using random numbers).
  • The idea is that every person in the population
    has an equal chance of ending up in the sample.
  • Most statistical methods assume that you are
    sampling randomly, and it is the only method
    which is guaranteed overall to ensure that
    samples are free from bias, and therefore
    provides validity.

112
Quota Sampling
  • Here, you identify particular sectors of the
    population, such as men, women, those under 21,
    those over 21, employed, unemployed etc., and put
    quotas on the number of people in each category.
  • For example, in a quota sample of 12, we might
    have:
  • 2 males under 21; 2 females under 21
  • 4 males over 21; 4 females over 21
  • The purpose here is not to get a sample which
    represents the population in its entirety, but to
    ensure that views from important sections of the
    population are represented.
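Quota filling amounts to accepting respondents as they appear until each cell is full. A sketch of that logic, using the 12-person quota from this slide; the stream of passers-by and the classification function are invented for illustration:

```python
def fill_quotas(stream, quotas, classify):
    """Accept respondents from `stream` until every quota is filled."""
    filled = {group: [] for group in quotas}
    for person in stream:
        group = classify(person)
        if group in filled and len(filled[group]) < quotas[group]:
            filled[group].append(person)
        if all(len(v) == quotas[g] for g, v in filled.items()):
            break  # all quotas met; stop interviewing
    return filled

quotas = {("M", "under 21"): 2, ("F", "under 21"): 2,
          ("M", "over 21"): 4, ("F", "over 21"): 4}

# Hypothetical passers-by: (name, sex, age)
people = [("p%d" % i, "MF"[i % 2], 18 + (i * 7) % 30) for i in range(60)]
classify = lambda p: (p[1], "under 21" if p[2] < 21 else "over 21")

sample = fill_quotas(people, quotas, classify)
print({g: len(v) for g, v in sample.items()})
```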

113
Stratified Random Sampling
  • One of the problems with random sampling is that
    if you take small samples, you can very easily
    end up with a biased sample, by chance.
  • In stratified random sampling, you would measure
    what proportions of the population lie in each
    category or stratum, and select your sample so
    that you get precisely those proportions in those
    strata.
  • For example, in your population you might have:
  • 15% males under 21; 10% females under 21
  • 30% males over 21; 45% females over 21
  • If you select a sample of 200 you would need
    stratified samples of:
  • 30 males under 21; 20 females under 21
  • 60 males over 21; 90 females over 21
  • The allocation of subjects to samples should be
    done randomly.
  • The purpose here is to get a valid sample which
    represents the population in its entirety.
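The allocation arithmetic above (population share of each stratum times sample size) can be sketched directly, using the figures from this slide. One caveat worth a comment: with awkward percentages, rounding can make the cells sum to slightly more or less than n:

```python
def stratified_allocation(proportions, n):
    """How many subjects to draw from each stratum for a sample of n,
    given each stratum's percentage share of the population.
    Note: rounding may leave the total slightly different from n."""
    return {stratum: round(n * pct / 100) for stratum, pct in proportions.items()}

population = {"males under 21": 15, "females under 21": 10,
              "males over 21": 30, "females over 21": 45}
print(stratified_allocation(population, 200))
# {'males under 21': 30, 'females under 21': 20,
#  'males over 21': 60, 'females over 21': 90}
```

Within each stratum, the individuals themselves should still be picked at random, as the slide notes.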

114
Other Sampling Methods
  • Snowball Sampling: This method is used whenever
    you are dealing with a tricky subject, where
    people may be engaged in devious or illegal
    activities (for example hacking, virus creation
    etc.). In this case, you would use one respondent
    to suggest the name of another who might be
    willing to be interviewed; that one would lead to
    several others, and so on.
  • Cluster Sampling: This method might be used if
    you have a diverse population (such as those
    living in African urban communities). Here you
    would randomly select particular cities, and
    sample within neighbourhoods within those cities.
  • Convenience Sampling: This is used mainly for
    investigative research. It relies on the
    assumption that people appearing at a particular
    location or time do so at random (which may not
    be true). We use this assumption to stand in one
    place and interview whoever appears.

115
Sampling On-Line References
  • This is an easy introduction:
  • http://www.csm.uwe.ac.uk/~pwhite/SURVEY1/node26.html
  • This is more complex and technical:
  • http://www.socialresearchmethods.net/kb/external.htm

116
6. Data Analysis
  • This section of the presentation looks at the
    different types of data available, and how this
    can be analysed

117
Analysis of Research Data
Here we examine how the data produced by the
research can be presented effectively and
statistically analysed, at the final stages of the
research process:

Undertake Literature Review
Select Research Questions
Devise Methodology & Research Instruments
Apply Methods & Instruments
Perform Statistical Analysis
Test Hypotheses & Draw Conclusions
118
Data Analysis
  • 6a Types of Data
  • 6b Extracting Data for Analysis
  • 6c Presentation of Data
  • 6d Principles of Statistical Testing
  • 6e Selected Tests
  • 6f Analytical Tools

119
6a Types of Data
  • In this section we describe the different types
    of data that you might encounter, and what you
    might do with it in your study.

120
Types of Data
  • "Measurement-type" data, given in units measured
    on a decimal scale, e.g. MB/sec, times, averages
    and other statistical summary values.
  • "Counted" data: number of occurrences of events
    or objects, e.g. number of pages visited.
  • "Ranking" data: subjective impressions, such as
    marks out of 10, rankings etc., e.g. interface
    rated as very poor (1), poor (2), good (3) or
    very good (4).
  • "Categorical" data: names of different things or
    categories, such as those named by an interviewee
    (NB these may be counted), e.g. Manager A
    discussed 3 concerns: security, access and
    reliability.
  • "Descriptive" data, such as that produced by
    interviews or observational field-notes, e.g.
    "Subject A tried to click on all the blue text,
    even though it was not underlined."

121
Measured Data
  • Such data is called "continuous", "ratio scale",
    or "real number" data.
  • This type of data is the easiest to use: it can
    be arithmetically manipulated (added, subtracted,
    multiplied, divided, summarised).
  • It is also likely to have good, predictable
    statistical features.
  • There are lots of methods to pick from.
  • The inclusion of valid measured data will add
    reliability to your study.

122
Counted Data
  • Such data is normally called "discrete",
    "interval scale", or "integer" data.
  • It can be added, subtracted and averaged.
  • Counted data has almost all the features of
    measured data, especially if the counts are
    large (above 30 or so).
  • For small numbers (e.g. fewer than 30
    observations in total), methods are limited.
  • The inclusion of valid counted data will almost
    certainly improve the reliability of your study.

123
Ranking Data
  • Such data is normally called "ordinal scale"
    data.
  • In order to use it effectively, there are several
    techniques; the usual one is to convert the data
    into numerical values (e.g. Yes = 1, No = 0).
  • When this has been done, it can be added,
    subtracted and averaged.
  • Ranking data needs care: it looks like counts
    when the numbers are small, but the values are
    simply subjective impressions. To deal with such
    data we need to summarise large volumes of it.
  • The inclusion of valid ranking data, suitably
    summarised, can add reliability to your study.

124
Categorical Data
  • Such data is normally called "nominal" data.
  • In order to use such data effectively, the
    objects, classes or ideas that these categories
    represent should have the items within them
    counted, evaluated on a rating scale and/or
    summarised.
  • A preliminary study may have found, for example,
    that there are 4 main uses for spreadsheets in
    business: Finance, Timesheets, Inventory and
    General Calculation. Businesses may be asked to
    rank these in order of importance, or to rate
    their usefulness.
  • Categorical data is much more difficult to deal
    with: processes are time consuming and highly
    subjective, and there are relatively few
    statistical tools which deal effectively with
    categorical data.
  • The inclusion of valid categorical data,
    suitably summarised, can add both validity and
    reliability to your study.

125
Descriptive Data
  • Descriptive data is simply a verbal description
    of something that happened, or of what a
    respondent did, or a quote from an interview.
  • It is the most difficult to deal with
    effectively. There are methods for extracting
    summary information from such data (see Grounded
    Theory, Glaser & Strauss); however, these are not
    for the novice researcher in IT, as they are very
    time consuming.
  • The best use for such data is as illustration
    and supporting evidence elsewhere.
  • The inclusion of appropriate descriptive data,
    suitably selected, can improve the overall
    validity of your study.

126
Types Of DataReferences (SW Library)
  • Research methods in education and the social
    sciences / Research Methods. - Block 5:
    Classification and measurement. - Milton Keynes:
    Open University Press, 1979. - (DE304 a third
    .. - 0335074405
  • Research methods in education and the social
    sciences / Research Methods. - Block 6: Making
    sense of data. - Milton Keynes: Open University
    Press, 1979. - (DE304 a third level course). -
    0335074413

127
Types of Data References (SW Library)
  • Chatfield, Christopher. - Statistics for
    technology: a course in applied statistics. -
    2nd ed. - London: Penguin Books, 1978. -
    0412157500
  • Kranzler, Gerald. - Statistics for the terrified
    / Gerald D. Kranzler, Janet P. Moursund. -
    Englewood Cliffs, N.J.: Prentice Hall; London:
    Prentice-Hall International, 1995. - 0131838318
  • Pentz, Mike. - Handling experimental data. -
    Milton Keynes: Open U.P., 1988. - 0335158242
  • Salkind, Neil J. - Statistics for people who
    (think they) hate statistics. - Thousand Oaks,
    Calif.; London: Sage, 2000. - 0761916210

128
6b Extracting Data For Analysis
  • Data produced by the research process does not
    have to be numerical in order to be of value.
    However, statistical techniques work best where
    the data uses numerical values, or can be
    converted to numerical scales.

129
Survey Data
  • Questionnaire data should be the easiest data to
    extract. Ideally the questionnaire will have been
    designed with the methods of analysis in mind, so
    that questions are framed in such a way as to
    facilitate graphical representation and
    statistical testing.
  • Interview data can be more problematic: you may
    need to invent categories, and to count up
    instances of different cases, both within the
    same interview and across different subjects.

130
Questionnaire Data Examples
  • Percentage of respondents using Broadband: 82%
  • In the survey, 93% of people used their computer
    to access the internet, compared to 37% who used
    it for word-processing, and 7% who used it to
    develop programs.
  • 87% of Linux users rated their operating system
    as good or very good, compared to 64% of Windows
    users.
  • The average number of hours spent on the computer
    each week by men was 9.3, compared to 3.7 for
    women.

131
Interview Data Examples
  • 5 out of the 6 interviewees were using
    Broadband.
  • During the interviews, all interviewees mentioned
    the positive benefits of moving to broadband. For
    example, one respondent said "My daughter can use
    the phone while I am replying to emails."
    Altogether there were 25 instances in the
    interviews where benefits were named, as opposed
    to only three cases where drawbacks were noted.
  • These cases broke down as follows:
  • 20% of cases referred to accessibility issues
  • 60% of cases referred to improved speed
  • 20% of cases referred to improved compatibility
  • However, it must be noted that all the drawbacks
    were cited by a respondent who reportedly used
    the internet for over 70 hours per week. The
    other five respondents' use averaged 15.7 hours,
    with totals ranging from 4 hours to 25 hours.

132
Observational Data
  • Pure observational field notes are unlikely to
    contain numerical data. As with interview data,
    you may have to invent appropriate categories
    an