1
Abstract: OXFORD Shock of the Old, 7/4/05
Tony Gardner-Medwin, Dept. Physiology, UCL, London WC1E 6BT
a.gardner-medwin@ucl.ac.uk
  •  Why is your institution (probably) not using
    confidence-based marking (CBM) in place of
    right-wrong marking for objective tests?  Decades
    of research and a decade of large-scale
    implementation at UCL have shown it to be
    theoretically sound, pedagogically beneficial,
    popular with students and easy to implement with
    both on-line and optical mark reader
    technologies.  If the answer is ignorance, then
    you should look at our FDTL-funded dissemination
    website (www.ucl.ac.uk/lapt). Maybe the answer is
    inertia and the imagined constraints of an
    institutional VLE.  But if you think that CBM
    must somehow be subjective, arbitrary, irrelevant
    to assessment of knowledge and understanding,
    discipline-specific, time-wasting, requiring new
    types of assessment material, or favouring
    particular personalities, then almost certainly
    you need to think or read more deeply about it.
    Within instructional material and formative or
    summative tests it helps reduce some of the very
    sensible regrets that we all have when we are
    forced to replace part of our paper-based
    assessments and small group teaching with
    automated tests and material. If you worry that
    your students simply repeat what they have
    learned - whether in essays or computer tests - 
    without understanding why it is true, then CBM
    can help you discriminate between well-justified
    knowledge, tentative hunches, lucky guesses,
    simple ignorance and seriously confident errors. 
     The presentation will explain what CBM is all
    about, give you experience based on questions
    about the Highway Code, seek audience feedback
    about what you perceive as potential + and -
    features, and cover evidence about many of the
    issues raised above. The take-away message is
    that you fail in your duty to your students if
    you treat lucky guesses as equivalent to
    knowledge, or serious misconceptions as no worse
    than acknowledged ignorance. Your assessments
    should be something in which you have
    confidence.

2
Gaining Confidence in
Confidence-Based Marking
Tony Gardner-Medwin, Physiology, UCL
www.ucl.ac.uk/lapt
  • What is CBM? Why? When?
  • What's it like to experience CBM?
  • What are possible pros and cons?
  • .. DISCUSSION ..
  • Issues, Data, implementation

Why is your institution (probably) not using
confidence-based marking (CBM) in place of
right-wrong marking for objective tests?
Oxford 4/05
3
What is CBM?
The LAPT (UCL) Confidence-Based Marking scheme, applied to each answer that will be marked right/wrong (e.g. T/F, MCQ, EMQ, numerical, simple text):

  Confidence Level           1        2        3
  Score if Correct           1        2        3
  Score if Incorrect         0       -2       -6
  Best marks obtained if
    Probability correct    <67%    67-80%    >80%
    Odds                   <2:1     >2:1     >4:1
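To see why those thresholds are where they are, here is a minimal sketch (not part of the original slides) of the scheme as tabulated above. The expected mark at each level is linear in the probability p of being correct, so the best choice switches at p = 2/3 (odds 2:1) and p = 4/5 (odds 4:1):

```python
# Sketch of the CBM scheme as tabulated above: marks +1/+2/+3 if correct,
# 0/-2/-6 if incorrect, at confidence levels 1/2/3.

SCORES = {1: (1, 0), 2: (2, -2), 3: (3, -6)}  # level -> (mark if correct, mark if incorrect)

def expected_score(p_correct: float, level: int) -> float:
    """Expected mark on one answer, given the probability of being correct."""
    if_right, if_wrong = SCORES[level]
    return p_correct * if_right + (1 - p_correct) * if_wrong

def best_level(p_correct: float) -> int:
    """Confidence level that maximises the expected mark."""
    return max(SCORES, key=lambda lvl: expected_score(p_correct, lvl))

# Expected marks are linear in p, so the best choice switches at the crossover
# points: level 2 overtakes level 1 at p = 2/3 (odds 2:1), and level 3
# overtakes level 2 at p = 4/5 (odds 4:1), matching the table above.
for p in (0.55, 0.70, 0.85):
    print(p, best_level(p), [expected_score(p, lvl) for lvl in (1, 2, 3)])
```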
4
Why CBM? (1) Knowledge is degree of belief,
or confidence:
  • knowledge
  • uncertainty
  • ignorance
  • misconception
  • delusion

(2) Students must be able to justify knowledge:
relate it to other things, check it and argue
with rigour. Rote learning is the bane of
education.
Knowledge is justified true belief. In
teaching we need to emphasise justification. In
assessment we need to measure degrees of belief.
5
With CBM you must think about justification. You
gain EITHER if you find justifications for high
confidence OR if you see justifications for
reservation.
6
(No Transcript)
7
When & how do we use CBM?
Potentially whenever answers are marked right/wrong:
  • Student study: self-assessment, revision, learning materials; stand-alone (PC) or on the web, at home or in College
  • Formative tests (once-off or repeat-till-pass, with randomised Qs or values), e.g. End of Module tests, Maths
  • Practice/Assessment: access portal e.g. via WebCT, with grades returned e.g. to WebCT
  • Open access for other universities, schools, etc.: BMAT practice tips, GCSE maths, Biol AL, Physics, etc.
  • Exams, summative assessment (at UCL): T/F or MCQ, EMQ etc. using Optical Mark Reader (OMR, Speedwell) cards; processing available through UCL
8
The UCL Confidence-Based Marking scheme, applied to each answer that will be marked right/wrong (e.g. T/F, MCQ, EMQ, numerical, simple text):

  Confidence Level           1        2        3
  Score if Correct           1        2        3
  Score if Incorrect         0       -2       -6
  Best marks obtained if
    Probability correct    <67%    67-80%    >80%
    Odds                   <2:1     >2:1     >4:1

  • What seem possible benefits (+) or drawbacks (-)
    to such a scheme?
  • in formative work?
  • in exams?

9
Personality, gender issues: real or imagined?
Does confidence-based marking favour certain
personality types?
  • Both underconfidence and overconfidence are
    undesirable
  • Correct calibration is well defined, desirable
    and achievable
  • No significant gender differences are evident (at
    least after practice)
  • Students with confidence problems: this is the
    way to deal with it!
  • In exams, we can adjust to compensate for poor
    calibration, so students still benefit from
    distinguishing more/less reliable answers

10
How well do students discriminate confidence?
Mean ± 95% confidence limits, 331 students
11
(No Transcript)
12
Reliability and Validity of Confidence-based exam
marks
Exam marks are determined by:
  1. the student's knowledge and skills in the subject area
  2. the level of difficulty of the questions
  3. chance factors: how questions relate to details of the student's knowledge and how uncertainties resolve (luck)

(1) = signal (its measurement is the object of the exam); (3) = noise (random factors obscuring the signal). Confidence-based marks improve the signal-to-noise ratio.
A simple, convincing test of this is to compare marks on one set of questions with marks for the same student on a different set (e.g. odd and even Q nos.). High correlation means the data are measuring something about the student, not just noise.
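An illustrative sketch of that odd/even split-half check (not UCL's actual analysis code); it assumes a students × questions array of per-question marks:

```python
# Illustrative sketch of the odd/even split-half test: correlate each
# student's total on odd-numbered questions with their total on
# even-numbered questions. Assumes `marks` is a students x questions array.
import numpy as np

def split_half_correlation(marks: np.ndarray) -> float:
    odd_total = marks[:, 0::2].sum(axis=1)   # totals on questions 1, 3, 5, ...
    even_total = marks[:, 1::2].sum(axis=1)  # totals on questions 2, 4, 6, ...
    return float(np.corrcoef(odd_total, even_total)[0, 1])

# Hypothetical usage, comparing the two marking rules on the same answers:
#   r_cbm = split_half_correlation(cbm_marks)            # +1/+2/+3, 0/-2/-6
#   r_conventional = split_half_correlation(right_wrong) # 1 / 0
# A higher correlation means more of the score variance reflects the student,
# not question-by-question luck.
```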
13
Marks scaled: 0 = chance, 100 = max
The correlation, across students, between scores
on one set of questions and another is higher for
CBM than for simple scores.
But perhaps they are just measuring ability to
handle confidence ?
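For reference, the "0 = chance, 100 = max" scaling above is a linear rescale; a brief sketch, with names that are illustrative rather than from the slides:

```python
# Linear rescale of a raw exam score so that chance-level performance maps to 0
# and a perfect paper to 100. `chance_score` (expected raw score for blind
# guessing) and `max_score` are assumed known for the test in question.
def scale_mark(raw: float, chance_score: float, max_score: float) -> float:
    return 100.0 * (raw - chance_score) / (max_score - chance_score)

# e.g. 300 T/F questions marked 1/0: chance_score = 150, max_score = 300,
# so scale_mark(225, 150, 300) == 50.0
```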
14
Improvements in reliability and efficiency,
comparing CBM to conventional scores, in 6
medical student exams (each 250-300 T/F Qs, >300
students).
15
Cronbach's Alpha (standard psychometric measure of
reliability). On six exams (mean ± SEM, n = 6):
  α = 0.925 ± 0.007 using CBM
  α = 0.873 ± 0.012 using number of items correct
  • The improvement (P < 0.001, paired t-test)
    corresponds to a reduction of the random element
    in the variance of exam scores from 14.6% of the
    student variance to 8.1%.
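For completeness, a sketch (not the original analysis code) of how Cronbach's alpha is computed from a students × items mark matrix, and how the quoted percentages follow from it: alpha estimates true variance / observed variance, so the noise-to-true-variance ratio is (1 - alpha) / alpha.

```python
# Sketch only (assumed data layout: a students x items array of per-item marks).
import numpy as np

def cronbach_alpha(marks: np.ndarray) -> float:
    """Cronbach's alpha for a students x items mark matrix."""
    k = marks.shape[1]                            # number of items
    item_vars = marks.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = marks.sum(axis=1).var(ddof=1)     # variance of students' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

def noise_over_true_variance(alpha: float) -> float:
    """Random (noise) variance as a fraction of true student variance.
    Alpha estimates true/observed variance, so noise/true = (1 - alpha)/alpha."""
    return (1 - alpha) / alpha

print(noise_over_true_variance(0.873))  # ~0.145 (the ~14.6% figure, via the unrounded alpha)
print(noise_over_true_variance(0.925))  # ~0.081 (the 8.1% figure)
```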

16
Arriving at a conclusion through probabilistic
inference
17
We fail if we mark a lucky guess as if it were
knowledge. We fail if we mark delusion as no
worse than ignorance.
www.ucl.ac.uk/lapt