Today in Detecting Bad Science: Replication Failure. When independent researchers repeat a study, the results may be quite different. The hallmark of reliable knowledge is a successful independent replication. If another team of researchers repeats the study and finds results in the same direction, with a similar effect size, you can be confident that the original result is a robust and generalizable finding.
https://detectingbadscience.wordpress.com/2024/11/03/replication-failure/
#replication #betterscience #reliability #science
=> More information about this toot | More toots from renebekkers@mastodon.social
@renebekkers Seems like a problem that will never cease, as long as academia continues to use overpriced black boxes with firmware versions and hardware issues that are completely unknown to the scientists using them.
=> More information about this toot | More toots from ekis@mastodon.social
@renebekkers I think it's fair to say that when independent researchers repeat a study, generally speaking the results will be quite different. I'm always shocked when results replicate in independent datasets.
But I do a lot of meta-analyses, and that's what those are for: looking for consensus across experiments, not replication. Meta-analyses also allow for exploring experimental factors that can change the result.
Replication's great, but don't freak out if it's not 100%.
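A minimal sketch of what that consensus step amounts to, with invented numbers rather than data from any real experiments: inverse-variance pooling of per-experiment effect estimates.

```python
# Illustrative sketch (numbers made up): inverse-variance pooling of effect
# estimates from several experiments, the basic building block of a meta-analysis.
import numpy as np

# Per-experiment effect estimates (e.g. mean differences) and standard errors.
estimates = np.array([0.30, 0.12, 0.45, 0.22, 0.05])
std_errors = np.array([0.15, 0.10, 0.25, 0.12, 0.20])

weights = 1.0 / std_errors**2                         # precise experiments count more
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled estimate = {pooled:.3f} ± {1.96 * pooled_se:.3f} (95% CI half-width)")
```

Experiments with smaller standard errors get more weight, which is the sense in which a meta-analysis looks for consensus across experiments rather than an exact repeat of any single result.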
#science
=> More information about this toot | More toots from chiasm@mastodon.online
@chiasm agreed! A tough problem in science that meta-analyses tend to aggravate is publication bias. https://detectingbadscience.wordpress.com/2024/09/29/publication-bias/
By averaging over published studies, which tend to be more positive than the full set of findings actually obtained (including those that were never published), meta-analyses reinforce overly positive impressions of effect sizes. Pre-registered replications tend to reveal that actual effects are only about half as strong as those reported in the original studies.
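A toy simulation of that shrinkage (all numbers assumed, not taken from any real literature): a small true effect, many underpowered studies, and only the "significant" positive results getting published.

```python
# Toy simulation of publication bias: the average over published studies is
# inflated, while the average over all studies run stays near the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.2, 30, 2000

published, all_estimates = [], []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treat, control)
    d_hat = treat.mean() - control.mean()              # observed effect (SD = 1, so roughly Cohen's d)
    all_estimates.append(d_hat)
    if p < 0.05 and d_hat > 0:                         # only positive, "significant" results get published
        published.append(d_hat)

print(f"true effect:                    {true_d:.2f}")
print(f"average over ALL studies:       {np.mean(all_estimates):.2f}")  # close to the truth
print(f"average over PUBLISHED studies: {np.mean(published):.2f}")      # what a naive meta-analysis sees
```

The published-only average comes out well above the true effect, while the average over every study run, which is roughly what a large pre-registered replication estimates, stays close to it.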
=> More information about this toot | More toots from renebekkers@mastodon.social
@renebekkers Oh, pre-registration is definitely the way to go! I was actually doing prospective meta-analyses: collecting legacy datasets, running the same processing and analyses on each, and then doing the meta-analysis. At least in my field, it's eye-opening to see THE best-known result in the literature replicate in maybe 10 out of 15 datasets, while the lesser results are only seen in a meta-analysis; the results are consistent, but not "significant" in each dataset, so they'd not have been published.
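A rough sketch of that workflow, with simulated stand-ins for the legacy datasets (the numbers and the effect size here are invented, not the actual collections): one pre-specified analysis is applied identically to every dataset, and the per-dataset results are then combined.

```python
# Prospective meta-analysis sketch: same analysis on each (simulated) legacy
# dataset, then inverse-variance combination of the per-dataset results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def same_analysis(treat, control):
    """The one pre-specified analysis applied identically to every dataset."""
    diff = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / len(treat) + control.var(ddof=1) / len(control))
    p = 2 * stats.norm.sf(abs(diff / se))
    return diff, se, p

# 15 small datasets, each with the same modest true effect of 0.2.
results = []
for _ in range(15):
    n = int(rng.integers(25, 50))
    results.append(same_analysis(rng.normal(0.2, 1.0, n), rng.normal(0.0, 1.0, n)))

for i, (est, se, p) in enumerate(results, 1):
    print(f"dataset {i:2d}: estimate={est:+.2f}, p={p:.2f}")

est = np.array([r[0] for r in results])
se = np.array([r[1] for r in results])
w = 1 / se**2
pooled, pooled_se = (w * est).sum() / w.sum(), np.sqrt(1 / w.sum())
print(f"combined: {pooled:+.2f}, p={2 * stats.norm.sf(abs(pooled / pooled_se)):.4f}")
```

With a modest true effect like the one simulated here, most per-dataset p-values tend to land above 0.05 while the combined estimate does not — exactly the "consistent but not significant in each dataset" pattern.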
=> More information about this toot | More toots from chiasm@mastodon.online
@chiasm if the analysis is the same for each dataset, why not pool all data and conduct one mega-analysis? Example: https://doi.org/10.1007/s10433-022-00691-5
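For contrast, a sketch of what a mega-analysis could look like under a simple assumed setup (simulated data, not the study behind the link): pool the individual-level records from every dataset and fit one model, with a dummy per dataset to absorb baseline differences.

```python
# Mega-analysis sketch: concatenate individual-level data from all datasets
# and fit a single regression with per-dataset intercepts plus one treatment effect.
import numpy as np

rng = np.random.default_rng(2)

# Simulate 3 datasets with different baselines but the same treatment effect (0.3).
frames = []
for k, baseline in enumerate([0.0, 0.5, -0.2]):
    n = 200
    x = rng.integers(0, 2, n)                        # treatment indicator
    y = baseline + 0.3 * x + rng.normal(0, 1, n)     # outcome
    frames.append((np.full(n, k), x, y))

dataset = np.concatenate([f[0] for f in frames])
x = np.concatenate([f[1] for f in frames])
y = np.concatenate([f[2] for f in frames])

# Design matrix: one intercept per dataset (fixed effects) + the treatment effect.
X = np.column_stack([dataset == k for k in range(3)] + [x]).astype(float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"pooled (mega-analysis) treatment effect: {beta[-1]:.3f}")
```

The price is that every dataset has to be measured on the same variables and be shareable at the individual level.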
=> More information about this toot | More toots from renebekkers@mastodon.social
@renebekkers Because each dataset has its own data-collection variations, and meta-analysis allows for that.
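Concretely, that allowance usually enters through a between-dataset variance term. A sketch with invented numbers: Cochran's Q, I², and the DerSimonian-Laird τ² feeding random-effects weights instead of assuming one common effect.

```python
# Random-effects sketch: quantify between-dataset variation (Q, I^2, tau^2)
# and use it to downweight the assumption of a single common effect.
import numpy as np

estimates = np.array([0.10, 0.35, 0.22, 0.50, 0.05])   # per-dataset effects (made up)
se = np.array([0.08, 0.12, 0.10, 0.15, 0.09])

w = 1 / se**2
fixed = (w * estimates).sum() / w.sum()
Q = (w * (estimates - fixed)**2).sum()                  # Cochran's Q: heterogeneity beyond chance
df = len(estimates) - 1
tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))   # DerSimonian-Laird between-study variance

w_re = 1 / (se**2 + tau2)                               # random-effects weights
re = (w_re * estimates).sum() / w_re.sum()
re_se = np.sqrt(1 / w_re.sum())
print(f"I^2 = {100 * max(0.0, (Q - df) / Q):.0f}% (share of variation due to real differences)")
print(f"random-effects estimate = {re:.3f} ± {1.96 * re_se:.3f}")
```

When τ² is large, the weights flatten out, so no single dataset's quirks dominate the combined estimate.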
=> More information about this toot | More toots from chiasm@mastodon.online
@renebekkers Also in some cases the data can't be shared, so we collaborate with the original investigators to run the analysis we want and send the results.
=> More information about this toot | More toots from chiasm@mastodon.online
@chiasm ah of course - with open data being the new normal I had forgotten about that
=> More information about this toot | More toots from renebekkers@mastodon.social