Visit the Sheffield PandA Lab:
https://sites.google.com/sheffield.ac.uk/panda-lab/home
If a measure is unreliable and you happen to find very low within-group variability, that low variability may have arisen accidentally. So whoever tries to replicate will probably get a different outcome.
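A quick simulation sketch of this point (group size and the zero-reliability assumption are mine, for illustration): scores from a pure-noise measure with true SD = 1, sampled in small groups, will sometimes show deceptively low within-group variability just by chance.

```python
import numpy as np

rng = np.random.default_rng(2)

# 10,000 small groups (n = 10 each) of scores from a zero-reliability
# (pure-noise) measure whose true SD is 1
sds = np.array([rng.standard_normal(10).std(ddof=1) for _ in range(10_000)])

# the typical sample SD is near 1, but the unluckiest 5% of groups
# look much less variable than the measure really is
print(f"mean SD = {sds.mean():.2f}, 5th percentile = {np.quantile(sds, 0.05):.2f}")
```

A replication drawn from the same noisy measure would most likely land back near the typical SD, not the fluke low one.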
If our DV is highly unreliable, then effect sizes are small by default, because low reliability = noise, and noise attenuates the observed effect. The results of our between-group comparison are just not very replicable.
The only solution is huge sample sizes, to give us the power to detect those smaller effects.
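Rough numbers for how fast this bites (a sketch using the standard attenuation relation, observed d ≈ true d × √reliability, and a textbook normal-approximation sample-size formula for a two-sample comparison; the true d of 0.5 is a made-up example):

```python
import math

def n_per_group(d, alpha=0.05, power=0.80):
    # normal-approximation n per group for a two-sample test:
    # n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    z_a, z_b = 1.959964, 0.841621
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

d_true = 0.5  # hypothetical true standardized effect
for rel in (1.0, 0.7, 0.5, 0.3):
    d_obs = d_true * math.sqrt(rel)  # attenuation from DV unreliability
    print(f"reliability {rel:.1f}: observed d = {d_obs:.2f}, n/group = {n_per_group(d_obs)}")
```

Dropping the DV's reliability from 1.0 to 0.5 roughly doubles the sample you need per group for the same power.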
1. If we have no reliability (test-retest r = 0), then any correlation we're finding is not replicable. After all, if a measure doesn't correlate with itself, how can it correlate with any other measure?
Weak reliability = a less replicable correlation
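You can see the attenuation directly in a simulation (the latent r of .60 and the shared reliability for both measures are my made-up example values): when both measures have reliability rel, the observed correlation shrinks toward rel × latent r, and at rel = 0 it vanishes entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# two latent variables with a true correlation of .60
t_x = rng.standard_normal(n)
t_y = 0.6 * t_x + np.sqrt(1 - 0.6**2) * rng.standard_normal(n)

for rel in (1.0, 0.6, 0.0):
    # observed score = sqrt(rel)*true + sqrt(1-rel)*error,
    # so each measure's test-retest reliability is rel
    x = np.sqrt(rel) * t_x + np.sqrt(1 - rel) * rng.standard_normal(n)
    y = np.sqrt(rel) * t_y + np.sqrt(1 - rel) * rng.standard_normal(n)
    print(f"reliability {rel:.1f}: observed r = {np.corrcoef(x, y)[0, 1]:.2f}")
```

With zero reliability the observed scores are pure noise, so the observed correlation sits at ~0 no matter how strong the latent relationship is.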
This is what I found from a quick search (I can't tell whether the second one was published):
pmc.ncbi.nlm.nih.gov/articles/PMC...
core.ac.uk/download/pdf...
Excel, who asked you to round up the seconds???