Title: Measuring Post-Licensure Competence
1. Measuring Post-Licensure Competence
- The Nursing Performance Profile
2. Research Team
- Janine Hinton, RN, PhD
- Mary Mays, PhD
- Debra Hagler, RN, PhD
- Pamela Randolph, RN, MS
- Beatrice Kastenbaum, RN, MSN, CNE
- Ruth Brooks, RN, MS, BC
- Nick DeFalco, RN, MS
- Kathy Miller, RN, MS
- Dan Weberg, RN, MHI
3. Support
- Funded by NCSBN CRE Grant
- Supported by
- Scottsdale Community College
- Arizona State University
- Arizona State Board of Nursing
4. Statement of the Problem
- A valid, reliable practice assessment is needed to support intervention on the public's behalf when a pattern of nursing performance results in, or is likely to result in, patient harm.
5. Literature Review
- Medical errors are a leading cause of death (IOM, 2000)
- Written tests do not directly measure performance (Auewarakul, Downing, Jaturatamrong, & Praditsuwan, 2005)
- Multiple observations of a nurse's performance have provided evidence of competent practice (Williams, Klamen, & McGaghie, 2003)
6. Literature Review
- High-fidelity simulation technology allows the creation of reproducible scenarios to evaluate nursing performance (Boulet et al., 2011; Kardong-Edgren, Adamson, & Fitzgerald, 2010)
- Nursing and health care leaders have called for performance assessments to evaluate competence and support remediation (Benner, Sutphen, Leonard, & Day, 2010; IOM, 2011)
7. Purpose of Study
- To develop and evaluate a high-stakes simulation
testing process to measure minimally safe nursing
practice competence and identify remediation
needs.
8. Methodology
- Needed a process to apply sophisticated measures of validity and reliability
- Participants appeared in 3 simulation videos
- 3 subject matter experts rated each video on 41 measures of competency
- Raters were blind to participant ability, experience, and order of testing
- Videos presented a range of safe and unsafe performance
- Obtained ratio-level data suitable for parametric, inferential statistical analysis
9. Filming Participant Demographics
- Criteria: newly licensed RNs with less than 3 years of nursing experience (N = 21)
- Average age: 32
- 95% female
- 58% White, 16% Black, 26% Hispanic
- 79% AD, 21% BSN
- Mean experience: 1.05 years
- Majority (74%) had some experience with simulation
10. Rater Demographics
- Criteria: BSN, 3 years of experience, and work in a role that involves evaluating others (N = 4)
- Average experience: 12.5 years
- Age: 31-51
- All White, female
- Education: 3 BSN, 1 MS
11. Instrument Development
- Developed and established initial validity/reliability before funding
- TERCAP served as the theoretical framework (Benner et al., 2006; Woods & Doan-Johnson, 2003)
- Survey items on NCSBN's Clinical Competency Assessment of Newly Licensed Nurses were adapted (NCSBN, 2007)
- Mapped to QSEN competencies
12. Categories of Items (TERCAP)
- Professional Responsibility
- Client Advocacy
- Attentiveness
- Clinical Reasoning, noticing
- Clinical Reasoning, understanding
- Communication
- Prevention
- Procedural Competency
- Documentation
13. Example of One Item Category's Competencies
- Prevention
- Infection control
- 2 client identifiers
- Appropriate positioning
- Safe environment
14. Scoring: 4 Possibilities
- Performance or action is consistent with standards of practice and free from actions that may place the client at risk for harm
- Fails to perform, or performs in a manner that exposes the client to risk for harm
- No opportunity to observe in the scenario
- Blank (a coding sketch follows this list)
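A minimal sketch of how these four outcomes could be coded for analysis. The enum names, the error-count convention, and the idea that a category score is a negated count of failed items are assumptions for illustration (consistent with the ratio-level data mentioned in the methodology), not the study's documented coding scheme.

    # Hypothetical coding of the four possible ratings for one item (illustrative only).
    from enum import Enum

    class Rating(Enum):
        MEETS = "consistent with standards, no risk of harm"
        FAILS = "fails to perform / exposes client to risk for harm"
        NOT_OBSERVED = "no opportunity to observe in the scenario"
        BLANK = "item left blank"

    def category_score(ratings):
        """Assumed convention: negate the count of failed items, so 0 is error-free."""
        return -sum(1 for r in ratings if r is Rating.FAILS)

    print(category_score([Rating.MEETS, Rating.FAILS, Rating.FAILS]))  # -2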
15. Scoring the Test
- No weighted items
- No pass/fail standard
- Description of the nurse's performance across 9 categories of competency
- Final rating of each item based on inter-rater agreement: at least 2 of 3 raters agree (see the consensus sketch below)
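A small illustration of the 2-of-3 agreement rule. The function name and string labels are hypothetical; the helper simply returns the rating that a majority of the three raters assigned, or None when all three disagree.

    from collections import Counter

    def consensus(rater_values):
        """Return the value at least 2 of the 3 raters agree on, else None."""
        value, count = Counter(rater_values).most_common(1)[0]
        return value if count >= 2 else None

    print(consensus(["meets", "fails", "meets"]))  # meets
    print(consensus(["meets", "fails", "blank"]))  # None (no majority)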
16. Scenarios
- 3 sets of 3 scenarios scripted (9 total)
- Adult acute care, common diagnoses
- Each scenario had opportunities to observe all performance items
- Each simulated patient had a hospital-like chart with information: labs, history, MAR, orders
17. Simulation Testing/Rating
- 21 nurse performers and 63 videos
- Scenario Set 1: 5 participants
- Scenario Set 2: 8 participants
- Scenario Set 3: 8 participants
- Each video evaluated by 3 raters
- 189 rating instruments
- 41 items rated on each instrument
- 7,749 ratings (arithmetic check below)
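The counts on this slide follow from simple multiplication; a quick check using only the numbers stated above:

    performers, scenarios_each, raters_per_video, items = 21, 3, 3, 41
    videos = performers * scenarios_each        # 21 x 3 = 63 videos
    instruments = videos * raters_per_video     # 63 x 3 = 189 rating instruments
    ratings = instruments * items               # 189 x 41 = 7,749 individual ratings
    print(videos, instruments, ratings)         # 63 189 7749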
18. Analysis Procedures
- Predictive Analytics Software (PASW, v. 18.0.3; SPSS Inc., Chicago, IL)
- Frequency analysis to identify instrument properties
- Used as intended
- Interrater reliability
- Sensitive to common practice errors (construct validity)
- Cronbach's alpha (inter-correlation among items) was used to measure internal consistency (a computation sketch follows)
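A minimal NumPy sketch of the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The small participant-by-item matrix is made-up demonstration data, not study data.

    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array with rows = participants, columns = items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                           # number of items
        item_var_sum = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of participants' totals
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    demo = np.array([[1, 1, 1, 0],
                     [1, 0, 1, 1],
                     [0, 0, 1, 0],
                     [1, 1, 1, 1]])
    print(round(cronbach_alpha(demo), 2))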
19. Analysis Procedures (cont.)
- ANOVA used to:
- Assess the ability of the instrument to distinguish between experienced and inexperienced nurses
- Assess potential bias created by administration methods (a two-way mixed ANOVA sketch follows)
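One way to reproduce a two-way mixed ANOVA of this kind with open-source tools. The study used SPSS/PASW; the pingouin call, file name, and column names below are illustrative assumptions for a long-format table holding one category error score per nurse per competency category.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: nurse_id, experience_group, category, score
    df = pd.read_csv("npp_scores.csv")

    aov = pg.mixed_anova(data=df,
                         dv="score",                  # category error score
                         within="category",           # repeated factor: 9 competency categories
                         subject="nurse_id",
                         between="experience_group")  # <1 year vs 1-3 years
    print(aov)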
20. Results
- Less than 1% of items were left blank or not observed, indicating the scenarios were comprehensive
- Interrater reliability: across all 41 items, at least 2 raters agreed on 99.12% of ratings
- Internal consistency: Cronbach's alpha 0.84-0.91 on the 41 items, combined and separately
21. Results
- Construct validity: pass rates should mirror those in other studies
- Infection control: pass rate 57%, mainly due to lack of hand hygiene
- Documentation: pass rate 29%, an area of frequent concern in practice
22. Results
- Criterion validity
- 2 groups by nursing experience: <1 year or 1-3 years
- 2-way mixed ANOVA
- Experienced nurses made fewer errors than new nurses (p < 0.001)
- Significant in 6 of 9 categories:
- Attentiveness
- Clinical Reasoning (noticing)
- Clinical Reasoning (understanding)
- Communication
- Procedural Competency
- Documentation
23. Comparison of Groups by Category

Category | Inexperienced Nurses M (SD) | Experienced Nurses M (SD) | p value
Professional Responsibility | -0.33 (0.58) | -0.22 (0.49) | --
Client Advocacy | -0.57 (0.81) | -0.25 (0.50) | --
Attentiveness | -0.76 (1.00) | -0.17 (0.38) | 0.002
Clinical Reasoning - Noticing | -1.19 (1.33) | -0.47 (0.81) | 0.01
Clinical Reasoning - Understanding | -1.67 (1.28) | -0.81 (0.95) | 0.005
Communication | -1.48 (1.44) | -0.75 (0.97) | 0.03
Prevention | -1.57 (1.50) | -1.31 (0.82) | --
Procedural Competency | -2.76 (2.32) | -1.19 (1.37) | 0.002
Documentation | -3.33 (0.73) | -2.61 (1.20) | 0.02
24. NPP Results
[Nursing Performance Profile plots comparing an inexperienced nurse (0.5 year) with nurses having 1 and 2 years of experience; a minimal plotting sketch follows]
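A minimal matplotlib sketch of the kind of profile plot referred to here; the category scores are made-up illustrative values, not study data.

    import matplotlib.pyplot as plt

    categories = ["Prof. Responsibility", "Client Advocacy", "Attentiveness",
                  "Noticing", "Understanding", "Communication", "Prevention",
                  "Procedural Competency", "Documentation"]
    scores = [-0.3, -0.5, -0.8, -1.2, -1.7, -1.5, -1.6, -2.8, -3.3]  # illustrative

    plt.barh(categories, scores)
    plt.axvline(0, color="black", linewidth=0.8)
    plt.xlabel("Category error score (0 = no observed errors)")
    plt.title("Nursing Performance Profile (illustrative)")
    plt.tight_layout()
    plt.show()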
25. Results
- Test bias
- Scenario was not significant
- Category was significant: some competency categories were more difficult
- Communication, prevention, procedural competency, and documentation were more difficult
26. Results
- Test bias (continued)
- Scenario set: significant only for documentation, which may be easier in Set 1
- Order of testing and practice effect not significant
- Location of testing not significant
27. Summary
- Instrument has adequate validity and reliability
- Raters used the instrument as instructed and in a reproducible manner
- Items were highly interrelated
- Sensitive to common errors
- Inexperienced nurses made more errors
- Test not biased
- Plots permit users to visualize performance
28. Implications
- Provides a valid, explicit measure of performance that regulatory boards could use, along with other data, to determine whether practice errors are a one-time occurrence or a pattern of high-risk behavior
- Potential uses in education and practice to assess performance and the effect of educational interventions
29. Limitations
- Volunteer subjects: not random or representative
- Sample size too small to support confirmatory factor analysis of the instrument's construct validity
- Tailored to a specific context and purpose
- Limitations of simulation: non-verbal and skin-change cues are missing; participants must suspend disbelief
30. Future Research
- Funded by NCSBN for Phase II
- Criterion validity by comparing RN self-ratings and supervisor ratings
- Compare to education and certification
- Broader cross-section of experienced nurses recruited
31. References
- Auewarakul, C., Downing, S. M., Jaturatamrong, U., & Praditsuwan, R. (2005). Sources of validity evidence for an internal medicine student evaluation system: An evaluative study of assessment methods. Medical Education, 39, 276-283.
- Benner, P., Sutphen, M., Leonard, V., & Day, L. (2010). Educating nurses: A call for radical transformation. San Francisco, CA: Jossey-Bass.
- Boulet, J. R., Jeffries, P. R., Hatala, R. A., Korndorffer, J. R., Feinstein, D. M., & Roche, J. P. (2011). Research regarding methods of assessing learning outcomes. Simulation in Healthcare, 6(7), Supplement, 48-51.
- Institute of Medicine (IOM). (2011). The future of nursing: Leading change, advancing health. Washington, DC: National Academies Press.
32. References
- Institute of Medicine. (2000). To err is human: Building a safer health system. Washington, DC: National Academies Press.
- Kardong-Edgren, S., Adamson, K. A., & Fitzgerald, C. (2010). A review of currently published evaluation instruments for human patient simulation. Clinical Simulation in Nursing, 6(1). doi:10.1016/j.ecns.2009.08.004
- Williams, R. G., Klamen, D. A., & McGaghie, W. C. (2003). Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine, 15(4), 270-292.