A new study has called for scientists to adopt a clearer and more consistent approach when reporting the significance of their findings, in an effort to tackle ongoing concerns about research reliability. The research focuses on the idea of the “minimally meaningful effect size”, which refers to the smallest result that would be considered practically significant in the context of a study. The findings were published in Psychreg Journal of Psychology.
The authors argue that specifying this measure in advance could prevent questionable research practices that distort the true importance of results. In particular, they highlight a problem they call “HIMEing”, where researchers change their interpretation of results after seeing the data, sometimes overstating or downplaying an effect’s practical importance. Such practices can mislead readers, policy-makers, and other scientists about the real-world value of research findings.
Pre-registration, where scientists record their study design and analysis plans before collecting data, has been promoted as a way to improve transparency. While widely adopted in recent years, pre-registration often leaves out details about how effect sizes will be interpreted. This omission can leave the door open for flexible interpretations, which the authors say undermines scientific rigour.
By including a predetermined minimally meaningful effect size in pre-registration documents, researchers can set a clear threshold for when a result counts as practically important. This, they argue, helps to ensure that significance is judged not only on statistical grounds but also on practical relevance. The measure should be tailored to each study, taking into account its design, context, and intended applications, rather than relying on generic benchmarks.
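As a rough illustration of the idea (not a procedure taken from the paper), a pre-registered threshold can be checked alongside an ordinary significance test. The sketch below assumes a simple two-group comparison in raw outcome units; all data, numbers, and names are hypothetical.

```python
import numpy as np
from scipy import stats

def interpret_effect(control, treatment, min_meaningful_diff, alpha=0.05):
    """Judge a result on statistical AND practical grounds, using a
    minimally meaningful difference (in raw units) fixed before the
    data were seen. Illustrative sketch only."""
    diff = np.mean(treatment) - np.mean(control)

    # Statistical significance: Welch's two-sample t-test
    _, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    return {
        "observed_difference": round(float(diff), 2),
        "statistically_significant": p_value < alpha,
        # Practical relevance: does the point estimate reach the
        # pre-registered threshold?
        "practically_meaningful": abs(diff) >= min_meaningful_diff,
    }

# Hypothetical data: symptom counts in two groups of 150 participants,
# with a pre-registered threshold of a 2-symptom reduction.
rng = np.random.default_rng(42)
control = rng.poisson(lam=8, size=150)
treatment = rng.poisson(lam=7, size=150)
print(interpret_effect(control, treatment, min_meaningful_diff=2))
```

A fuller treatment would compare the confidence interval, rather than the point estimate alone, against the threshold, but the sketch captures the basic point: a result can be statistically detectable without being practically meaningful, and the pre-registered threshold makes that distinction explicit in advance.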
The paper outlines how such thresholds can be established, recommending that researchers consider previous studies, the specific context of their own work, and the real-world implications of any observed effect. In some cases, they suggest using unstandardised measures such as changes in currency amounts or symptom counts, as these are often easier to interpret than statistical units like standard deviations.
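For instance, if a two-symptom reduction is judged to matter clinically and earlier studies report symptom counts with a standard deviation of around five, an unstandardised threshold can still be translated into a standardised effect size when one is needed; the figures below are purely hypothetical.

```python
def raw_to_standardised(raw_threshold, typical_sd):
    """Express an unstandardised threshold (e.g. symptom counts, pounds)
    as a standardised effect size (Cohen's d)."""
    return raw_threshold / typical_sd

# A 2-symptom reduction, against an SD of 5 from earlier studies,
# corresponds to d = 0.4.
print(raw_to_standardised(raw_threshold=2, typical_sd=5))
```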
An example provided in the study illustrates how even small shifts in perception can be meaningful. In research on the “illusory truth effect”, which is the tendency to believe information more when it is repeated, a modest increase in truth ratings could influence whether people accept or reject certain statements. Such changes, though numerically small, may have significant social or policy consequences.
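To see why, consider a back-of-the-envelope calculation with invented numbers rather than figures from the study: if truth ratings on a 1-to-7 scale are roughly normally distributed and people “accept” a statement when they rate it above the midpoint, even a 0.3-point repetition boost moves a noticeable share of people across that line.

```python
from scipy.stats import norm

def share_accepting(mean_rating, sd=1.5, cutoff=4.0):
    """Proportion of people rating a statement above the acceptance
    cutoff, assuming roughly normal truth ratings (illustrative only)."""
    return 1 - norm.cdf(cutoff, loc=mean_rating, scale=sd)

before = share_accepting(3.8)   # statement seen once
after = share_accepting(4.1)    # statement repeated (+0.3 points)
print(f"acceptance rises from {before:.0%} to {after:.0%}")
```

On those assumptions, roughly eight percentage points more people would accept the repeated statement, which illustrates how a numerically small shift in ratings can carry real social weight.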
The authors acknowledge that introducing these thresholds will not be straightforward, particularly for those new to pre-registration or unfamiliar with the concept of effect sizes. However, they believe the benefits, which include greater transparency, better interpretation of results, and reduced publication bias, outweigh the challenges. They stress that studies producing results below the threshold still have value, as they can guide future research and refine understanding of a topic.
The call for integrating the minimally meaningful effect size into pre-registration reflects a broader movement towards more open and accountable science. With replication concerns continuing to affect trust in research, the authors say that setting clear interpretive standards is a vital step in ensuring that scientific findings are both credible and useful.