Dan McNeish
@dmcneish.bsky.social
Quant Psyc professor at Arizona State. Into clustered data, latent variables, psychometrics, intensive longitudinal data, and growth modeling.

https://sites.google.com/site/danielmmcneish
These kinds of models make lots of assumptions, so make sure not to skip the limitations section if you're considering something like this!

/4
April 2, 2025 at 9:19 AM
Trying the model out on the motivating empirical data made a huge difference, changing the sign and conclusion about the intervention effect (2nd and 3rd rows in the image, left vs. right column).

/3
April 2, 2025 at 9:19 AM
The paper basically takes the Diggle-Kenward model from growth modeling and tries to jam it into a multilevel autoregressive model/DSEM.

Some simulations showed that it worked well, that it was much better than models that assume MAR when data are MNAR, and that it recovers true values pretty well.
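A toy simulation (not the paper's actual model, and with made-up coefficients) can illustrate the kind of MNAR process involved: in an AR(1) series, the probability that an observation is missing depends on the current, possibly unobserved, value itself, so the observed data are systematically unrepresentative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) time series (the kind of process a multilevel
# autoregressive model / DSEM works with, here for a single person)
T, phi = 2000, 0.5
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

# Diggle-Kenward-style selection: the probability that y[t] is missing
# depends on y[t] itself -- this is what makes the data MNAR.
# (The intercept and slope here are arbitrary illustration values.)
p_miss = 1 / (1 + np.exp(-(-1.0 + 1.5 * y)))
y_obs = np.where(rng.random(T) < p_miss, np.nan, y)

# Because high values are more likely to go missing, the observed mean
# is biased downward relative to the complete-data mean (~0)
print(round(y.mean(), 2), round(np.nanmean(y_obs), 2))
```

An MAR-based analysis of `y_obs` inherits this bias, which is why jointly modeling the outcome and the missingness process can change conclusions.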

/2
April 2, 2025 at 9:19 AM
Recent papers from the personality literature cited in the “Directly using items as predictors” section of the paper below basically argue that it doesn’t matter what items measure as long as they predict a relevant outcome (which sounds like putting predictive validity above other kinds of validity)

link.springer.com/article/10.1...
Practical Implications of Sum Scores Being Psychometrics’ Greatest Accomplishment - Psychometrika
February 21, 2025 at 6:55 PM
This work was part of a project funded by the US Dept of Education/IES, which has been a major supporter of pure methods/statistics/psychometrics work in the US, so that people like me don't have to beg substantive people to tack a methods aim onto an empirical grant

/end
February 18, 2025 at 9:15 AM
Goal is hopefully to help researchers be a little more articulate when reporting reliability of scale scores, and to incorporate more recent ideas from the psychometric literature on conditional reliability, when appropriate, to complement summary indices like alpha or omega

/6
February 18, 2025 at 9:15 AM
The output also provides a number between 0 and 100.

Values close to 100 indicate that alpha/omega represent most scores well.

Values close to 0 indicate that scores have heterogeneous reliability and a summary does not describe some of the sample very well.
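As a rough sketch of the idea behind such a 0-to-100 number (with a made-up tolerance rule, not necessarily one of the methods in the paper), one could report the share of the sample whose score's conditional reliability sits near the alpha/omega summary:

```python
import numpy as np

def representativeness(cond_rel, counts, summary, tol=0.05):
    """Percent of respondents whose score's conditional reliability is
    within `tol` of the summary index. The tolerance criterion here is
    a hypothetical illustration, not the paper's actual method."""
    cond_rel, counts = np.asarray(cond_rel), np.asarray(counts)
    close = np.abs(cond_rel - summary) <= tol
    return 100 * counts[close].sum() / counts.sum()

# Hypothetical conditional reliabilities at five scale scores, with the
# number of people observed at each score
cond_rel = [0.55, 0.78, 0.84, 0.86, 0.70]
counts = [5, 30, 40, 20, 5]
print(representativeness(cond_rel, counts, summary=0.82))  # → 90.0
```

Here 90% of the sample sits at scores where reliability is close to the summary, so a single alpha/omega would describe most (but not all) respondents well.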

/5
February 18, 2025 at 9:15 AM
Result is a plot that looks like this -- the conditional reliability at each score (the colored line; color indicates how many people are at that score) is plotted against the alpha/omega summary index (black line)

/4
February 18, 2025 at 9:15 AM
Shiny input looks like this -- upload the data, identify the scale items and the desired coefficient, and choose a method from which to calculate the "reliability representativeness" (different methods discussed in the paper)

/3
February 18, 2025 at 9:15 AM
Basic idea borrows conditional reliability from the IRT literature and compares the discrepancy between the conditional reliability function and a single summary like alpha/omega.
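A minimal sketch of the IRT side of that idea, assuming a 2PL model and the common I/(I+1) conversion from test information to reliability under a unit-variance latent trait (the item parameters below are invented):

```python
import numpy as np

def conditional_reliability_2pl(theta, a, b):
    """Conditional reliability across trait levels for a 2PL model:
    test information I(theta) is converted to reliability via
    I/(I+1), which assumes a unit-variance latent trait."""
    a, b = np.asarray(a)[:, None], np.asarray(b)[:, None]
    p = 1 / (1 + np.exp(-a * (theta[None, :] - b)))   # item response probs
    info = (a**2 * p * (1 - p)).sum(axis=0)           # sum of item informations
    return info / (info + 1)

# Ten hypothetical items all centered at theta = 0: reliability is
# highest near the middle of the trait range and drops off in the
# tails -- exactly the heterogeneity that a single alpha/omega hides
theta = np.linspace(-3, 3, 121)
rel = conditional_reliability_2pl(theta, a=np.ones(10), b=np.zeros(10))
print(round(rel.max(), 3), round(rel[0], 3))
```

Comparing a curve like `rel` against a flat summary line is the discrepancy the method quantifies.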

Shiny app to implement the method is located at dynamicfit.app/RelRep/

/2
Reliability Representativeness
February 18, 2025 at 9:15 AM
Yes, I think you'd have to use Bayesian methods in Mplus. I also don't think you could do a continuous time version in Mplus because I don't think that they've added support for binary variables in continuous time (although I might be behind on what is supported!)
January 17, 2025 at 5:54 PM
The OSF link is here if you’re interested, osf.io/be8h3/

It’s intensive longitudinal data where the outcome is a binary self-report question on binge eating. There’s 50% missingness and a suspected MNAR process where people don’t respond to the binge eating question when they binge eat.
Missing Not at Random Intensive Longitudinal Data with Dynamic Structural Equation Models
January 15, 2025 at 5:33 PM
I made this switch a few years ago and another thing that came up was that R (at least lme4) gives a lot more convergence warnings and errors than SAS, even when the output is identical. McCoach (2018, JEBS) studied this systematically and found similar results.
December 5, 2024 at 8:08 PM