No.
For example, if p = 0.03, then, assuming the null hypothesis is true, there was a 3% probability of obtaining a result at least as extreme as the one observed. The p-value does not tell you whether a result was due to chance. At least, that's how I've learned it (though perhaps I'm the one misunderstanding it!).
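To make that concrete, here's a quick simulation, just a minimal Python sketch (the coin-flip setup and the 60/100 numbers are hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: we observed 60 heads in 100 flips and want the
# one-sided p-value under the null hypothesis that the coin is fair.
n_flips, observed_heads = 100, 60

# Simulate many experiments in a world where the null is TRUE.
n_sims = 100_000
heads = rng.binomial(n_flips, 0.5, size=n_sims)

# The p-value is the probability, under the null, of a result at least
# as extreme as the one observed -- not the probability that the null
# is true, and not the probability the result "was due to chance."
p_value = np.mean(heads >= observed_heads)
print(f"approximate one-sided p-value: {p_value:.3f}")  # ~0.028
```

Run it and you get roughly 0.028, close to the p = 0.03 in my example: about 3% of fair-coin experiments produce a result at least that extreme.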
As @vokey588 and @operaman state, though, p-values are not that great (especially given how commonly people, including myself, misunderstand them). Confidence intervals are excellent and, like operaman, I hate seeing results without them.
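Since we're on confidence intervals, here's a minimal sketch of how a 95% CI for a proportion can be computed (this uses the normal/Wald approximation; the 60/100 figures are again hypothetical):

```python
import math

def wald_ci_95(successes: int, n: int) -> tuple[float, float]:
    """95% CI for a proportion using the normal (Wald) approximation.

    Good enough for illustration; for small samples or extreme
    proportions, a Wilson or exact interval behaves better.
    """
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    margin = 1.96 * se                       # z-multiplier for 95% coverage
    return (p_hat - margin, p_hat + margin)

# Hypothetical example: 60 responders out of 100 patients.
lo, hi = wald_ci_95(60, 100)
print(f"60/100 -> 95% CI ({lo:.3f}, {hi:.3f})")  # ~(0.504, 0.696)
```

The nice thing, and I suspect why operaman likes them, is that the interval shows the whole range of effect sizes compatible with the data, not just a binary significant/not-significant verdict.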
While I definitely agree with what you're getting at, we can probably say the same thing about a lot of what we learn in medicine. A common one I see on the wards, for example, is COPD and the "hypoxic respiratory drive." It's inaccurate, since the rise in PaCO2 we see in patients receiving supplemental O2 during a COPD exacerbation is, in reality, due to V/Q mismatch and the Haldane effect. But approaching it from the "hypoxic ventilatory drive" point of view doesn't drastically change clinical management either.
Here's a quote I like from Evidence-Based Diagnosis regarding understanding p-values:
The general idea is that I really like understanding the mechanisms behind things. It makes it easier for me to learn the material and retain it. It's more of a satisfaction issue, as the quote puts it. Like I said, though, I think you're completely right that we can do just fine with a rough idea of what a p-value is, even if it's not entirely accurate. So I've got nothing against that approach; it doesn't significantly affect the practical side of clinical medicine.
I both agree and disagree. I agree that it's hard to find someone who can teach the subject well and show us how it applies to clinical medicine. Part of it, I think, does have to do with throwing those terms (Gaussian, null, alpha, beta, etc.) at students too fast to digest. That said, I still think schools should go out of their way to find a good teacher who can teach this topic well to med students. In my opinion, the only subjects in med school possibly more important than statistics and study design are physiology and pathology/pathophysiology. It sucks only getting one or a few lectures on statistics.
I would prefer a journal club-type setup where, over the course of the year, you work through some landmark papers while learning about statistics and study design. I think most students would understand and appreciate the material better if it's presented slowly rather than tossed together into one lecture. I don't know if that makes sense or if I'm just rambling now.
My institution has just such a program, though we had to apply for it and only a few people per year are selected (it funds a full year of research as well). The professor who leads the journal club-style seminars is a clinical research methodology guru and personally curated a set of 20 papers that give an excellent overview of fundamental statistics, tricks study authors use to make their results seem more impressive, special types of trials (non-inferiority, factorial), stopping rules for trials, creating good composite outcomes, selective reporting of outcomes, and several other topics. I pasted the list of references below. If you could read only one, I'd suggest the "HARLOT plc" paper by Sackett et al. It's satirical but lays out many of the big ways trial authors will try to trick you. If you want to read them all, I'd start from the bottom and go up.
References
[1] Dekkers OM, Egger M, Altman DG, Vandenbroucke JP. Distinguishing case series from cohort studies. Ann Intern Med. 2012;156:37–40.
[2] Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–2465.
[3] Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977–984.
[4] Goodman SN. Stopping at nothing? Some dilemmas of data monitoring in clinical trials. Ann Intern Med. 2007;146:882–887.
[5] D'Agostino RB Jr, D'Agostino RB Sr. Estimating treatment effects using observational data. JAMA. 2007;297:314–316.
[6] Mueller PS, Montori VM, Bassler D, Koenig BA, Guyatt GH. Ethical issues in stopping randomized trials early because of apparent benefit. Ann Intern Med. 2007;146:878–881.
[7] Foster EM. Propensity score matching: an illustrative analysis of dose response. Med Care. 2003;41:1183–1192.
[8] Sackett DL, Oxman AD; HARLOT plc. HARLOT plc: an amalgamation of the world's two oldest professions. BMJ. 2003;327:1442–1445.
[9] Morton V, Torgerson DJ. Effect of regression to the mean on decision making in health care. BMJ. 2003;326:1083–1084.
[10] Freemantle N, Calvert M, Wood J, Eastaugh J, Griffin C. Composite outcomes in randomized trials: greater precision but with greater uncertainty? JAMA. 2003;289:2554–2559.
[11] Kaul S, Diamond GA. Good enough: a primer on the analysis and interpretation of noninferiority trials. Ann Intern Med. 2006;145:62–69.
[12] Spruance SL, Reid JE, Grace M, Samore M. Hazard ratio in clinical trials. Antimicrob Agents Chemother. 2004;48:2787–2792.
[13] McAlister FA, Straus SE, Sackett DL, Altman DG. Analysis and reporting of factorial trials: a systematic review. JAMA. 2003;289:2545–2553.
[14] Zhang J, Yu KF. What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes. JAMA. 1998;280:1690–1691.
[15] Katz MH. Multivariable analysis: a primer for readers of medical research. Ann Intern Med. 2003;138:644–650.
[16] Sterne JA, Davey Smith G. Sifting the evidence: what's wrong with significance tests? BMJ. 2001;322:226–231.
[17] Guyatt G, Walter S, Shannon H, Cook D, Jaeschke R, Heddle N. Basic statistics for clinicians: 4. Correlation and regression. CMAJ. 1995;152:497–504.
[18] Jaeschke R, Guyatt G, Shannon H, Walter S, Cook D, Heddle N. Basic statistics for clinicians: 3. Assessing the effects of treatment: measures of association. CMAJ. 1995;152:351–357.
[19] Guyatt G, Jaeschke R, Heddle N, Cook D, Shannon H, Walter S. Basic statistics for clinicians: 2. Interpreting study results: confidence intervals. CMAJ. 1995;152:169–173.
[20] Guyatt G, Jaeschke R, Heddle N, Cook D, Shannon H, Walter S. Basic statistics for clinicians: 1. Hypothesis testing. CMAJ. 1995;152:27–32.