Validation of Intermediate Measures VIM Study Results - PowerPoint PPT Presentation

1
Validation of Intermediate Measures (VIM) Study Results
  • Michael F. Green, PhD
  • UCLA Semel Institute
  • Department of Psychiatry and Biobehavioral
    Sciences, Geffen School of Medicine at UCLA
  • Department of Veterans Affairs, VISN 22 Mental
    Illness, Research, Education and Clinical Center
    (MIRECC)
  • October 27th, 2009
  • Bethesda MD

2
Steps in the VIM Study
1. Select key criteria for selection (Selection Criteria Committee)
2. Solicit nominations for intermediate measures (MATRICS-CT)
3. Select and categorize nominated measures for RAND Panel (VIM Committee)
4. Create database on criteria for candidate measures (UCLA staff / faculty)
5. Evaluate measures on criteria with RAND Panel Method (RAND Panelists)
6. Select measures for VIM Study (VIM Committee)
7. Conduct VIM Study (Site PIs and VIM Committee)
8. Review / summarize VIM results (VIM Committee / R. Kern)
9. Public presentation of results (Oct 2009)
3
VIM Study Results: Stages of Evaluation
1. Scientific criteria (reliability and validity)
   - key scientific criteria:
     - test-retest reliability
     - correlation with cognitive performance
   - additional scientific criteria
2. Operational criteria
   - practicality
   - tolerability
   - duration
3. Cross-cultural considerations
4
Process for Selecting Measures From VIM Study
Full Measures
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
5
Validation of Intermediate Measures (VIM) Study Results - Recruitment
6
Validation of Intermediate Measures (VIM) Study Results - Recruitment
Includes 27 subjects who did not meet eligibility and 3 who withdrew consent; 3 of these subjects had invalid assessments based on outlier scores and behavioral observations. Final sample: 163.
7
VIM Study Results: Data Preparation
Data Cleaning
1. Comprehensive review of all missing data.
2. Review all behavioral notes for validity.
3. Comprehensive review of all scores outside ±2 SD. These scores are judged as valid or not by the local site.
Data Distribution
Distributions were examined for all key dependent variables by the VIM Committee.
- no transformations needed
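The ±2 SD screen in step 3 of the data cleaning can be sketched as a simple flagging pass. This is a hypothetical illustration, not the VIM study's actual cleaning code; the function and variable names are assumed:

```python
from statistics import mean, stdev

def flag_outliers(scores, n_sd=2.0):
    """Return indices of scores falling outside mean +/- n_sd standard deviations.

    Flagged scores are candidates for manual review (judged valid or not
    by the local site), not automatic exclusion.
    """
    m, sd = mean(scores), stdev(scores)
    return [i for i, x in enumerate(scores) if abs(x - m) > n_sd * sd]

# Example: one clearly aberrant score in an otherwise tight distribution
print(flag_outliers([10, 11, 9, 10, 12, 10, 30]))  # -> [6]
```

Note that flagging feeds a human judgment step, matching the procedure above: an outlying score may still be valid.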
8
VIM Study Results: Sample Demographics
Defined as current age minus age of onset
9
VIM Study Results: Degree of Independent Living
10
VIM Study Results: Clinical Symptoms at Baseline
11
VIM Study Results: Clinical Stability
No significant differences
12
VIM Study Results: Key Measures at Baseline
(higher is worse)
13
VIM Study Results: Correlations with Clinical Symptoms
Note: for CAI and CGI-cognition, higher is worse. * p < .05, ** p < .01
14
Process for Selecting Measures From VIM Study
Full Measures
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
15
VIM Study Results: Test-retest Reliability
ILS, CAI > TABS
Inter-rater reliability for CAI: ICC = .73. The VIM study used the same rater 88% of the time.
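For context, an ICC like the one reported here can be computed from a subjects × raters score matrix via a two-way ANOVA decomposition. A minimal sketch of the ICC(2,1) form (two-way random effects, absolute agreement, single rater), assuming that is the variant reported; the ratings below are invented for illustration:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: list of rows, one per subject; each row holds that subject's
    score from each rater (same raters rate all subjects).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((rm - grand) ** 2 for rm in row_means)   # between subjects
    ss_cols = n * sum((cm - grand) ** 2 for cm in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Two raters scoring five subjects with near-perfect agreement
ratings = [[1, 1.1], [2, 2.2], [3, 2.9], [4, 4.1], [5, 5.2]]
print(round(icc_2_1(ratings), 3))  # -> 0.996
```

Reusing the same rater 88% of the time, as the study did, sidesteps much of the between-rater variance this statistic penalizes.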
16
VIM Study Results: Correlation with Cognitive Performance
UPSA > ILS > CGI-cog, CAI
TABS > CGI-cog, CAI
17
VIM Study Results: Additional Scientific Criteria
Utility as a repeated measure / Correlation with functioning
18
VIM Study Results: Practicality and Tolerability
Practicality and tolerability rated on a 1-7 scale, where 7 = best.
Tolerability: CAI, TABS > UPSA > ILS
Administration time: ILS > TABS > UPSA > CAI
19
VIM Study Results: Missing Data
20
Process for Selecting Measures From VIM Study

Full Measures
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
5. Selected measure(s) evaluated for cultural adaptation (only if measures are not at all culturally adaptable)

Short Forms
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Other considerations
4. Refined selection of measure(s) for further consideration
21
VIM Study Results: Short Forms, Test-retest Reliability
No significant differences among tests
22
VIM Study Results: Short Forms, Correlation with Cognitive Performance
No significant differences among tests
23
VIM Study Results: Short Forms, Additional Scientific Criteria
Utility as a repeated measure / Correlation with functioning
24
VIM Study Results: Difference Between Short and Long Forms
25
VIM Study Results: Implications of Reduced Reliability
A very simple hypothetical:
- an effect size of d = .5 exists in reality between 2 groups
- total sample size needed to achieve power = .80, alpha = .05, two-tailed, at different levels of reliability
G. Hellemann
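The arithmetic behind a hypothetical like this can be sketched as follows. Under the common approximation that unreliability inflates outcome variance, the observable effect shrinks to d·√reliability, and the standard two-group normal-approximation formula then gives the total N. This is an illustrative reconstruction under those assumptions, not the slide's actual computation:

```python
from math import ceil, sqrt
from statistics import NormalDist

def total_n(reliability, d_true=0.5, power=0.80, alpha=0.05):
    """Total sample size (both groups) for a two-tailed two-sample test,
    after attenuating the true effect for measurement unreliability:
    d_observed = d_true * sqrt(reliability)."""
    z = NormalDist()
    d_obs = d_true * sqrt(reliability)
    # normal-approximation sample size per group
    n_per_group = 2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2 / d_obs ** 2
    return 2 * ceil(n_per_group)

for rel in (1.0, 0.9, 0.7, 0.5):
    print(f"reliability {rel:.1f}: total N = {total_n(rel)}")
```

Under these assumptions, halving the reliability roughly doubles the required sample (126 at reliability 1.0 vs. 252 at 0.5), which is the point of the slide: lower test-retest reliability translates directly into larger trials.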
26
VIM Study Results: Conclusions
1. VIM Committee followed a clearly defined process for evaluation of study results for 5 measures: TABS, ILS, UPSA, CAI, CGI-cognition.
2. For the full measures, the UPSA was the leading measure because:
   - good test-retest reliability
   - excellent shared variance with cognitive performance
   - good utility as a repeated measure (no floor / ceiling effects)
   - reasonable tolerability and practicality
3. For short forms, the TABS and UPSA were the leading measures because:
   - well-defined short forms
   - moderate shared variance with cognitive performance
   - acceptable utility as a repeated measure
   - but lower test-retest reliability, requiring larger samples
27
VIM Committee and Site PIs
  • VIM Committee
  • Michael F. Green (chair) - UCLA, VISN 22 MIRECC
  • Nina R. Schooler (co-chair) - SUNY Downstate, VISN 5 MIRECC
  • Fred Frese - Northeastern Ohio Universities College of Medicine
  • Wendy Granberry - GSK
  • Philip D. Harvey - Emory University
  • Craig N. Karson - Merck
  • Stephen R. Marder - UCLA, VISN 22 MIRECC
  • Nancy Peters - Sanofi-aventis
  • Michelle Stewart - Pfizer
  • Ellen Stover - NIMH
  • VIM Study Site PIs
  • Robert Kern - UCLA, VISN 22 MIRECC
  • Larry Seidman - Beth Israel Deaconess / Harvard
  • John Sonnenberg - Uptown Research
  • William Stone - Beth Israel Deaconess / Harvard
  • David Walling - Collaborative Neuroscience Network