ReCentering Psych Stats: Psychometrics
PREFACE
Copyright with Open Access
ACKNOWLEDGEMENTS
1 Introduction
1.1 What to expect in each chapter
1.2 Strategies for Accessing and Using this OER
1.3 If You are New to R
1.3.1 Base R
1.3.2 R Studio
1.3.3 R Hygiene
1.3.4 tRoubleshooting in R maRkdown
1.3.5 stRategies for success
1.3.6 Resources for getting staRted
2 Questionnaire Construction: The Fundamentals
2.1 Navigating this Lesson
2.1.1 Learning Objectives
2.1.2 Planning for Practice
2.1.3 Readings & Resources
2.2 Components of the Questionnaire
2.3 What Improves (or Threatens) Response Rates and Bias?
2.3.1 Should Likert-type scales include a midpoint?
2.3.2 Should continuous rating scales be used in surveys?
2.3.3 Should Likert-type response options use an ascending or descending order?
2.3.4 Should surveys include negatively worded items?
2.4 Construct-specific guidance
2.5 Surveying in the Online Environment
2.6 In my Surveys
2.6.1 Demographics and Background Information
2.6.2 Survey Order
2.6.3 Forced Responses
2.7 Practice Problems
3 Be a QualTRIXter
3.1 Navigating this Lesson
3.1.1 Learning Objectives
3.1.2 Planning for Practice
3.1.3 Readings & Resources
3.1.4 Packages
3.2 Research Vignette
3.3 Qualtrics Essentials
3.4 Qual-TRIX
3.5 Even moRe, particularly relevant to iRb
3.6 intRavenous Qualtrics
3.6.1 The Codebook
3.6.2 Using data from an exported Qualtrics .csv file
3.6.3 Tweaking Data Format
3.7 Practice Problems
4 Psychometric Validity: Basic Concepts
4.1 Navigating this Lesson
4.1.1 Learning Objectives
4.1.2 Planning for Practice
4.1.3 Readings & Resources
4.1.4 Packages
4.2 Research Vignette
4.3 Fundamentals of Validity
4.4 Validity Criteria
4.4.1 Content Validity
4.4.2 Face Validity: The “Un”validity
4.4.3 Criterion-Related Validity
4.4.4 Construct Validity
4.4.5 Internal Consistency
4.4.6 Structural Validity
4.4.7 Experimental Interventions
4.4.8 Convergent and Discriminant Validity
4.4.9 Incremental Validity
4.4.10 Considering the Individual and Social Consequences of Testing
4.5 Factors Affecting Validity Coefficients
4.6 Practice Problems
4.6.1 Problem #1: Play around with this simulation.
4.6.2 Problem #2: Conduct the reliability analysis selecting different variables.
4.6.3 Problem #3: Try something entirely new.
4.6.4 Grading Rubric
4.7 Homeworked Example
4.7.1 Check and, if needed, format data
4.7.2 Create a correlation matrix that includes the instrument-of-interest and the variables that will have varying degrees of relation
4.7.3 With convergent and discriminant validity in mind, interpret the validity coefficients; this should include an assessment of whether the correlation coefficients (at least two different pairings) are statistically significantly different from each other.
4.7.4 With at least three variables, evaluate the degree to which the instrument demonstrates incremental validity (this should involve two regression equations and their statistical comparison)
5 Reliability
5.1 Navigating this Lesson
5.1.1 Learning Objectives
5.1.2 Planning for Practice
5.1.3 Readings & Resources
5.1.4 Packages
5.2 Defining Reliability
5.2.1 Begins with Classical Test Theory (CTT)
5.2.2 Why are we concerned with reliability? Error!
5.2.3 The Reliability Coefficient
5.3 Research Vignette
5.4 A Parade of Reliability Coefficients
5.4.1 Reliability Options for a Single Administration
5.4.2 Reliability Options for Two or more Administrations
5.4.3 Interrater Reliability
5.5 What do we do with these coefficients?
5.5.1 Corrections for attenuation
5.5.2 Predicting true scores (and their CIs)
5.5.3 How do I keep it all straight?
5.6 Practice Problems
5.6.1 Problem #1: Play around with this simulation.
5.6.2 Problem #2: Use the data from the live ReCentering Psych Stats survey.
5.6.3 Problem #3: Try something entirely new.
5.6.4 Grading Rubric
5.7 Homeworked Example
5.7.1 Check and, if needed, format the data
5.7.2 Calculate and report the alpha coefficient for a total scale score and subscales (if the scale has them)
5.7.3 Subscale alphas
5.7.4 Calculate and report \(\omega_{t}\) and \(\omega_{h}\)
5.7.5 With these two, determine what proportion of the variance is due to all the factors, error, and g.
5.7.6 Calculate total and subscale scores.
5.7.7 Describe other reliability estimates that would be appropriate for the measure you are evaluating.
6 Item Analysis for Educational Achievement Tests (Exams)
6.1 Navigating this Lesson
6.1.1 Learning Objectives
6.1.2 Planning for Practice
6.1.3 Readings & Resources
6.1.4 Packages
6.2 Research Vignette
6.3 Item Analysis in the Educational/Achievement Context
6.3.1 And now a quiz! Please take it.
6.4 Item Difficulty
6.4.1 Percent passing
6.4.2 Several factors prevent .50 from being the ideal difficulty level
6.5 Item Discrimination
6.5.1 Index of Discrimination
6.5.2 Application of Item Difficulty and Discrimination
6.6 In the psych Package
6.6.1 A Mini-Introduction to IRT
6.7 Closing Thoughts on Developing Measures in the Education/Achievement Context
6.8 Practice Problems
7 Item Analysis for Likert Type Scale Construction
7.1 Navigating this Lesson
7.1.1 Learning Objectives
7.1.2 Planning for Practice
7.1.3 Readings & Resources
7.1.4 Packages
7.2 Introducing Item Analysis for Survey Development
7.2.1 Workflow for Item Analysis
7.3 Research Vignette
7.4 Step I: Corrected item-total correlations
7.4.1 Data Prep
7.4.2 Calculating Item-Total Correlation Coefficients
7.5 Step II: Correlating Items with Other Scale Totals
7.6 Step III: Interpreting and Writing up the Results
7.7 A Conversation with Dr. Szymanski
7.8 Practice Problems
7.8.1 Problem #1: Play around with this simulation.
7.8.2 Problem #2: Use raw data from the ReCentering Psych Stats survey on Qualtrics.
7.8.3 Problem #3: Try something entirely new.
7.8.4 Grading Rubric
7.9 Bonus Reel:
7.10 Homeworked Example
7.10.1 Check and, if needed, format and score data
7.10.2 Report alpha coefficients and average inter-item correlations for the total and subscales
7.10.3 Produce and interpret corrected item-total correlations for total and subscales, separately
7.10.4 Produce and interpret correlations between the individual items of a given subscale and the subscale scores of all other subscales
7.10.5 Traditional Pedagogy Items
7.10.6 APA style results section with table
7.10.7 Explanation to grader
EXPLORATORY FACTOR ANALYSIS
8 Principal Components Analysis
8.1 Navigating this Lesson
8.1.1 Learning Objectives
8.1.2 Planning for Practice
8.1.3 Readings & Resources
8.1.4 Packages
8.2 Exploratory Principal Components Analysis
8.2.1 Some Framing Ideas (in very lay terms)
8.3 PCA Workflow
8.4 Research Vignette
8.5 Working the Vignette
8.5.1 Three Diagnostic Tests to Evaluate the Appropriateness of the Data for Component-or-Factor Analysis
8.5.2 Principal Components Analysis
8.5.3 Specifying the Number of Components
8.5.4 Component Rotation
8.5.5 Component Scores
8.6 APA Style Results
8.7 Back to the FutuRe: The relationship between PCA and item analysis
8.7.1 Calculating and Extracting Item-Total Correlation Coefficients
8.8 Practice Problems
8.8.1 Problem #1: Play around with this simulation.
8.8.2 Problem #2: Conduct a PCA with another simulated set of data in the OER.
8.8.3 Problem #3: Try something entirely new.
8.8.4 Grading Rubric
8.9 Homeworked Example
8.9.1 Check and, if needed, format data
8.9.2 Conduct and interpret the three diagnostic tests to determine if PCA is appropriate as an analysis (KMO, Bartlett’s, determinant)
8.9.3 Determine how many components to extract (e.g., scree plot, eigenvalues, theory)
8.9.4 Conduct an orthogonal extraction and rotation with a minimum of two different factor extractions
8.9.5 Conduct an oblique extraction and rotation with a minimum of two different factor extractions
8.9.6 Determine which factor solution (e.g., orthogonal or oblique; which number of factors) you will suggest
8.9.7 APA style results section with table and figure of one of the solutions
8.9.8 Explanation to grader
9 Principal Axis Factoring
9.1 Navigating this Lesson
9.1.1 Learning Objectives
9.1.2 Planning for Practice
9.1.3 Readings & Resources
9.1.4 Packages
9.2 Exploratory Factor Analysis (with a quick contrast to PCA)
9.3 PAF Workflow
9.4 Research Vignette
9.5 Working the Vignette
9.5.1 Data Prep
9.5.2 Principal Axis Factoring (PAF)
9.5.3 Factor Rotation
9.5.4 Factor Scores
9.6 APA Style Results
9.6.1 Comparing FA and PCA
9.7 Going Back to the Future: What, then, is Omega?
9.8 Comparing PAF to Item Analysis and PCA
9.9 Practice Problems
9.9.1 Problem #1: Play around with this simulation.
9.9.2 Problem #2: Conduct a PAF with another simulated set of data in the OER.
9.9.3 Problem #3: Try something entirely new.
9.9.4 Grading Rubric
9.10 Homeworked Example
9.10.1 Check and, if needed, format data
9.10.2 Conduct and interpret the three diagnostic tests to determine if PAF is appropriate as an analysis (KMO, Bartlett’s, determinant)
9.10.3 Determine how many factors to extract (e.g., scree plot, eigenvalues, theory)
9.10.4 Conduct an orthogonal rotation with a minimum of two different factor extractions
9.10.5 Conduct an oblique rotation with a minimum of two different factor extractions
9.10.6 Determine which factor solution (e.g., orthogonal or oblique; which number of factors) you will suggest
9.10.7 APA style results section with table and figure of one of the solutions
9.10.8 Explanation to grader
Confirmatory Factor Analysis
10 CFA: First Order Models
10.1 Navigating this Lesson
10.1.1 Learning Objectives
10.1.2 Planning for Practice
10.1.3 Readings & Resources
10.1.4 Packages
10.2 Two Broad Categories of Factor Analysis: Exploratory and Confirmatory
10.2.1 Common to Both Exploratory and Confirmatory Approaches
10.2.2 Differences between EFA and CFA
10.2.3 On the relationship between EFA and CFA
10.3 Exploring a Standard CFA Model
10.3.1 Model Identification for CFA
10.3.2 Selecting Indicators/Items for a Reflective Measurement
10.4 CFA Workflow
10.4.1 CFA in lavaan Requires Fluency with the Syntax
10.4.2 Differing Factor Structures
10.5 Research Vignette
10.5.1 Modeling the GRMSAAW as Unidimensional
10.5.2 Modeling the GRMSAAW as a First-Order, 4-factor model
10.6 Model Comparison
10.6.1 APA Results Section (so far…)
10.7 A concluding thought
10.8 Practice Problems
10.8.1 Problem #1: Play around with this simulation.
10.8.2 Problem #2: Use simulated data from other lessons.
10.8.3 Problem #3: Try something entirely new.
10.8.4 Grading Rubric
10.9 Homeworked Example
10.9.1 Prepare data for CFA (items only df, reverse-scored)
10.9.2 Specify and run a unidimensional model
10.9.3 Specify and run a single-order model with correlated factors
10.9.4 Narrate adequacy of fit with \(\chi^{2}\), CFI, RMSEA, SRMR (write a mini-results section)
10.9.5 Compare model fit with \(\chi^{2}\Delta\), AIC, BIC
10.9.6 APA style results with table(s) and figure
10.9.7 Explanation to grader
11 CFA: Hierarchical and Nested Models
11.1 Navigating this Lesson
11.1.1 Learning Objectives
11.1.2 Planning for Practice
11.1.3 Packages
11.2 CFA Workflow
11.3 Another Look at Varying Factor Structures
11.4 Revisiting Model Identification
11.4.1 Identification Status
11.4.2 Identification in Practice
11.5 Research Vignette
11.6 A Quick lavaan Syntax Recap
11.7 Comparing and Tweaking Multidimensional First-Order Models
11.8 An Uncorrelated Factors Model
11.8.1 Specifying the Model
11.8.2 Interpreting the Output
11.8.3 Partial Write-up
11.9 A Correlated Factors Model
11.9.1 Nested Models
11.9.2 Interpreting the Output
11.9.3 Partial Write-up
11.10 Model Respecification
11.10.1 Respecifying with Correlated Errors
11.10.2 Respecifying with Crossloadings
11.11 Modeling the GRMSAAW as a Second-Order Structure
11.11.1 Interpreting the Output
11.11.2 Partial Write-up
11.12 Modeling the GRMSAAW as a Bifactor Model
11.12.1 Interpreting the Output
11.12.2 Partial Write-up
11.13 Another Look at Omega
11.13.1 \(\omega_{h}\) for Bifactor Models
11.13.2 \(\omega_{h}\) for Second Order Models
11.13.3 Partial Write-up
11.14 Preparing an Overall APA Style Results Section
11.15 A Conversation with Dr. Keum
11.16 Practice Problems
11.16.1 Problem #1: Play around with this simulation.
11.16.2 Problem #2: Use simulated data from other lessons.
11.16.3 Problem #3: Try something entirely new.
11.16.4 Grading Rubric
11.17 Homeworked Example
11.17.1 Prepare data for CFA (items only df, reverse-scored)
11.17.2 Specify and run a unidimensional model
11.17.3 Narrate adequacy of fit with \(\chi^{2}\), CFI, RMSEA, SRMR (write a mini-results section)
11.17.4 Specify and run a single-order model with correlated factors
11.17.5 Narrate adequacy of fit with \(\chi^{2}\), CFI, RMSEA, SRMR (write a mini-results section)
11.17.6 Specify and run a second-order model
11.17.7 Narrate adequacy of fit with \(\chi^{2}\), CFI, RMSEA, SRMR (write a mini-results section)
11.17.8 Specify and run a bifactor model
11.17.9 Narrate adequacy of fit with \(\chi^{2}\), CFI, RMSEA, SRMR (write a mini-results section)
11.17.10 Compare model fit with \(\chi^{2}\Delta\), AIC, BIC
11.17.11 Calculate omega hierarchical (\(\omega_{h}\)) and omega-hierarchical-subscales (\(\omega_{h-ss}\))
11.17.12 APA style results with table(s) and figure
11.17.13 Explanation to grader
12 Invariance Testing
12.1 Navigating this Lesson
12.1.1 Learning Objectives
12.1.2 Planning for Practice
12.1.3 Readings & Resources
12.1.4 Packages
12.2 Invariance Testing (aka Multiple-Samples SEM or Multiple-Group CFA [MG-CFA])
12.2.1 Introducing the Topic and the Terminology
12.2.2 Evaluation Strategies
12.2.3 Invariance Testing Workflow
12.2.4 Successive Gradations of Measurement Invariance
12.2.5 Tests for Model Comparison
12.2.6 Partial measurement invariance
12.3 Research Vignette
12.4 Whole-Group and Baseline Analyses
12.4.1 Whole Group CFA
12.4.2 Interpreting the Output
12.4.3 Partial Write-up
12.4.4 Baseline Model when Severity = Mild
12.4.5 Baseline Model when Severity = Severe
12.5 Configural Invariance
12.5.1 Interpreting the Output
12.5.2 Partial Write-up
12.6 Weak Invariance
12.6.1 Interpreting the Output
12.6.2 Partial Write-up
12.7 Strong Invariance
12.7.1 Interpreting the Output
12.7.2 Partial Write-up
12.8 Strict Invariance
12.8.1 Interpreting the Output
12.8.2 Partial Write-up
12.9 Partial Measurement Invariance
12.10 APA Style Write-up of the Results
12.10.1 Measurement Invariance Across Disability Severity
12.11 Practice Problems
12.11.1 Problem #1: Play around with this simulation.
12.11.2 Problem #2: Adapt one of the simulated data sets.
12.11.3 Problem #3: Try something entirely new.
12.11.4 Grading Rubric
12.12 Homeworked Example
Check and, if needed, format data
Specify, evaluate, and interpret the CFA for the entire sample (making no distinction between groups). Write up the preliminary results.
Specify, evaluate, and interpret the CFA for configural invariance. Write up the preliminary results.
Specify, evaluate, and interpret the CFA for weak invariance. Conduct the analysis to compare the fit of the weak and configural models. Write up the preliminary results.
Specify, evaluate, and interpret the CFA for strong invariance. Conduct the analysis to compare the fit of the strong and weak models. Write up the preliminary results.
Specify, evaluate, and interpret the CFA for strict invariance. Conduct the analysis to compare the fit of the strict and strong models. Write up the preliminary results.
Create an APA style results section. Do not report any invariance tests past the one that failed. Include table(s) and figure(s).
13 Hybrid Models
13.1 Navigating this Lesson
13.1.1 Learning Objectives
13.1.2 Planning for Practice
13.1.3 Readings & Resources
13.1.4 Packages
13.2 Introducing the Statistic
13.3 Research Vignette
13.4 Importing and Preparing the Data
13.4.1 Analyzing and Managing Missingness
13.4.2 Assessing the Distributional Characteristics of the Data
13.4.3 Preliminary Analyses
13.4.4 Summary of Data Preparation
13.5 The Measurement Model: Specification and Evaluation
13.6 The Structural Model: Specification and Evaluation
13.6.1 Model Identification
13.6.2 Specifying and Evaluating the Structural Model
13.6.3 APA Style Write-up of the Results
13.7 Practice Problems
13.7.1 Problem #1: Download a fresh sample
13.7.2 Problem #2: Swap one or more of the variables
13.7.3 Problem #3: Try something entirely new.
13.7.4 Grading Rubric
13.8 Homeworked Example
Describe and draw the research model you will evaluate.
Import the data and format the variables in the model.
Analyze and manage missing data.
Assess the distributional characteristics of the data.
Conduct appropriate preliminary analyses (Ms, SDs, r-matrix).
Specify and evaluate a measurement model.
Specify and evaluate a structural model.
13.8.1 APA style results with table(s) and figure.
14 Additional Simulations
14.1 iBelong Scale
14.2 Identity Threat
14.3 Anti-Racism Behavioral Inventory
REFERENCES
Published with bookdown