Effect sizes not calculated from F (and t?) values #68
@lottiegasp can you provide a screenshot or more description of the issue? I'm not yet very familiar with the effect size calculation, but if I can see an example of what's not working, I may be able to assist. Thanks!
For example, NaturalSpeechSegmentation_Neuro should have 26 effect sizes, but on the MetaLab website only 6 show up. These are the six where means and standard deviations are included (rows 17-22, columns BB to BE), but 20 should be able to be calculated from F values (column BI). (There are also 8 rows where no effect size can be calculated because none of the means, SDs, t or F values are available; this is normal.)

The effect size calculation code is meant to include an if-then structure such that effect sizes are calculated from means and SDs if they are available, and otherwise from the reported t or F values (see the code below; this is the code I used in my meta-analysis, which was based on the code from a previous MetaLab study). But there must be an error whereby the effect sizes are not being calculated from t or F values.

`for(line in 1:length(db$n_1)){`
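The fallback logic described above could look roughly like the sketch below. This is not the actual MetaLab compute_es code: the column names (x_1, x_2, SD_1, SD_2, n_1, n_2, t, F, d_calc) follow the MetaLab template, and a between-participants design with one-degree-of-freedom F tests is assumed.

```r
# Sketch only: fall back from means/SDs to t, then to F, one row at a time
db$d_calc <- NA_real_
for (line in seq_len(nrow(db))) {
  has_means_sds <- !any(is.na(c(db$x_1[line], db$x_2[line],
                                db$SD_1[line], db$SD_2[line])))
  if (has_means_sds) {
    # Means and SDs available: Cohen's d with the pooled SD
    sd_pooled <- sqrt(((db$n_1[line] - 1) * db$SD_1[line]^2 +
                       (db$n_2[line] - 1) * db$SD_2[line]^2) /
                      (db$n_1[line] + db$n_2[line] - 2))
    db$d_calc[line] <- (db$x_1[line] - db$x_2[line]) / sd_pooled
  } else if (!is.na(db$t[line])) {
    # Fall back to t: for a between design, d = t * sqrt(1/n_1 + 1/n_2)
    db$d_calc[line] <- db$t[line] * sqrt(1 / db$n_1[line] + 1 / db$n_2[line])
  } else if (!is.na(db$F[line])) {
    # Fall back to a one-df F: |t| = sqrt(F); the sign must be taken from the paper
    db$d_calc[line] <- sqrt(db$F[line]) * sqrt(1 / db$n_1[line] + 1 / db$n_2[line])
  }
  # Otherwise d_calc stays NA, as for the 8 rows with no usable statistics
}
```

If the website only displays rows where d_calc is non-NA, any failure inside (or before) the t and F branches would produce exactly the symptom described here.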
@christinabergmann and @shotsuji we figured out that in the case of Fleur's neuro segmentation dataset, effect sizes aren't being calculated because there are no corr values in the spreadsheet, and the code is such that if corr values exist, new values are imputed based on the existing ones. But if no corr values exist in the first place, there is no alternative imputation method; see the code below.

From tidy_dataset, line 21:

`#Impute values for missing correlations`

Then from compute_es, line 40:

`} else if (participant_design == "within_two") {`

Should we perhaps adjust the code so that if no corr values exist in a dataset, the corr values across all the MetaLab datasets are used to impute them? Relatedly, I noticed that the MetaLab script's current method for imputing missing corr values is to randomly select from other existing values in the dataset (see above). For my study I imputed from a normal distribution using the median and variance of the existing corr values, and then adjusted the range to [-1, 1]. Rabagliati et al. (2018) imputed the mean weighted by sample size. Are you happy with the current imputation method, or should we discuss what is currently considered best practice?
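For reference, the three imputation options mentioned above might look something like the sketch below. This is illustrative only, not the current MetaLab implementation; `db` is a single dataset with corr and n_1 columns (MetaLab template names), and it assumes at least some corr values are present, which is exactly the assumption that fails for Fleur's dataset.

```r
set.seed(42)  # keep the random imputations reproducible
observed  <- db$corr[!is.na(db$corr)]
n_obs     <- db$n_1[!is.na(db$corr)]
missing_i <- which(is.na(db$corr))

# (1) Current approach: resample randomly from the observed correlations
db$corr_sampled <- db$corr
db$corr_sampled[missing_i] <- sample(observed, length(missing_i), replace = TRUE)

# (2) Draw from a normal distribution centred on the median of the observed
#     correlations with their spread, then clamp the draws to [-1, 1]
draws <- rnorm(length(missing_i), mean = median(observed), sd = sd(observed))
db$corr_normal <- db$corr
db$corr_normal[missing_i] <- pmin(pmax(draws, -1), 1)

# (3) Rabagliati et al. (2018) style: impute the sample-size-weighted mean
db$corr_weighted <- db$corr
db$corr_weighted[missing_i] <- weighted.mean(observed, w = n_obs)
```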
Oh, good catch. How about the following:
Thanks @christinabergmann @erikriverson. As a quick solution, are you able to work on some code that, for each row of data missing a corr value, extracts corr values from all MetaLab datasets with the same method and a similar age (maybe where mean_age is within +/- 15 days of the mean_age of the row in question) into an object (named CORRS_FROM_ALL_DATASETS below) and then imputes a corr value in the same way as the existing step?

`#Impute values for missing correlations`

Let me know if this makes sense. Then at a later time I can work on 2. and 3. as suggested by Christina. This will likely be in Feb after the conference.
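If it helps, here is a rough sketch of what that quick fix could look like. The names all_data (the combined data across MetaLab datasets), method, mean_age (in days), and corr_imputed are my assumptions; CORRS_FROM_ALL_DATASETS is the object named above.

```r
db$corr_imputed <- db$corr
for (line in which(is.na(db$corr))) {
  # Pool corr values from every MetaLab row with the same method and a
  # mean_age within +/- 15 days of the row that is missing a corr value
  idx <- which(!is.na(all_data$corr) &
                 all_data$method == db$method[line] &
                 abs(all_data$mean_age - db$mean_age[line]) <= 15)
  CORRS_FROM_ALL_DATASETS <- all_data$corr[idx]
  if (length(CORRS_FROM_ALL_DATASETS) > 0) {
    # Impute in the same way as the existing code: sample from the pooled values
    db$corr_imputed[line] <- sample(CORRS_FROM_ALL_DATASETS, 1)
  }
}
```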
It looks like rows only show up on the website if their effect sizes have been calculated from means and SDs, not from F or t values (I have checked that this is the case in both Fleur's dataset and mine).

Not sure if this is a problem with the script content or with its deployment, so I'm assigning @christinabergmann and @erikriverson. Let me know if I can help.