Statistical testing in medicine is a controversial and commonly misunderstood topic. Despite decades of effort by renowned associations and international experts, fallacies such as nullism, the magnitude fallacy, and dichotomania remain widespread in clinical and epidemiological research. These can lead to serious health errors (e.g., misidentification of adverse reactions). In this regard, our work sheds light on another common interpretive and cognitive error: the fallacy of high significance, understood as the mistaken tendency to prioritize findings that yield low p-values. Indeed, there are target hypotheses (e.g., a hazard ratio of 0.10) for which a high p-value is an optimal and desirable outcome. Accordingly, we propose a novel method that goes beyond mere null hypothesis testing by assessing the statistical surprise of the experimental result against the predictions of several target hypotheses. Additionally, we formalize the concept of interval hypotheses based on prior information about costs, risks, and benefits for the stakeholders (NORD-h protocol). The incompatibility graph (or surprisal graph) is adopted in this context. Finally, we discuss the epistemic necessity of a descriptive, (quasi) unconditional approach in statistics, which is essential for drawing valid conclusions about the consistency of data with all relevant possibilities, including study limitations. Given these considerations, this new protocol has the potential to substantially improve the production of reliable evidence in public health.
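The core idea of grading surprise against several target hypotheses, rather than the null alone, can be sketched as follows. This is a minimal illustration, not the paper's protocol: it assumes a normally distributed log hazard ratio estimate, converts each p-value to its Shannon surprisal (S-value, in bits), and the function names and numbers are hypothetical.

```python
import math

def two_sided_p(est_log_hr, se, target_hr):
    """Two-sided p-value of an observed log hazard ratio against a given
    target HR, under a normal approximation (illustrative assumption)."""
    z = (est_log_hr - math.log(target_hr)) / se
    # P(|Z| > |z|) for standard normal Z, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

def s_value(p):
    """Shannon surprisal of a p-value: bits of information against the
    tested hypothesis (0 bits = no surprise, larger = more surprising)."""
    return -math.log2(p)

# Hypothetical trial result: estimated HR = 0.10 with SE(log HR) = 0.3.
est, se = math.log(0.10), 0.3
for target in (1.0, 0.5, 0.10):
    p = two_sided_p(est, se, target)
    print(f"target HR {target:4.2f}: p = {p:.3g}, surprisal = {s_value(p):.1f} bits")
```

Under these illustrative numbers, the data are maximally surprising under the null (HR = 1) and not surprising at all under the target HR = 0.10, where the p-value is 1: exactly the situation in which a high p-value is the desirable outcome.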