I'm not sure if this is what you're looking for, but the classic article on problems with statistical significance is probably Jacob Cohen's 1994 paper, "The Earth Is Round (p < .05)," in American Psychologist.
It's VERY general (he's a quant guy). It's been a while since I've read it, but I don't remember it "defining" clinical significance (I'm not sure there is a true definition). Clinical significance is kind of what you make of it. Loosely, if Cohen's d is less than a medium effect, it's usually not "clinically meaningful." That's kind of an absurd blanket statement for me to make, though, since it really depends on the research question, etc. Like I mentioned in the other thread, if something can be implemented on a wide level, even a small effect can be very meaningful.
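To make the "medium effect" shorthand concrete, here's a minimal sketch of how Cohen's d is usually computed (standardized mean difference with a pooled SD), using made-up group numbers. The conventional benchmarks (~0.2 small, ~0.5 medium, ~0.8 large) are Cohen's own rough guides, not hard cutoffs:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical treatment vs. control scores (illustration only)
d = cohens_d(105, 15, 50, 100, 15, 50)
print(d)  # 5-point difference / SD of 15 -> d of about 0.33, a smallish effect
```

By Cohen's rough benchmarks, a d of ~0.33 sits between "small" and "medium," which is exactly the zone where the clinical-meaningfulness call depends on context rather than the number alone.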
As an example, there's epidemiological evidence (meaning statistical significance in large datasets) that taking aspirin can decrease the risk of heart attack. The decrease in risk might be only a tenth of one percent, but the dataset is large enough for that to be statistically significant. It's not clinically significant in the sense that if someone walks into the ER complaining of chest pain, you don't toss them a bottle of aspirin and call it a day. However, it IS "important" in the sense that if a million people start an aspirin regimen, that tenth of a percent means sparing 1000 of them. That's a huge oversimplification, and Cohen obviously does it a lot better than I can, but hopefully it helps. Unfortunately, there are no set guidelines for these sorts of things. Just rough estimates of what effect sizes mean that are context dependent, and it's up to the reader to interpret as they will.
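The aspirin point can be sketched numerically. Below is a toy two-proportion z-test with entirely hypothetical rates (1.0% vs. 0.9% heart-attack risk, a million people per arm) showing how a tiny absolute difference can be wildly statistically significant, have a negligible effect size, and still matter at population scale:

```python
import math

# Hypothetical numbers for illustration only: 1,000,000 people per arm,
# heart-attack rate 1.0% without aspirin vs. 0.9% with aspirin.
n = 1_000_000
p_control, p_aspirin = 0.010, 0.009

# Two-proportion z-test: the absolute difference is tiny, but the huge n
# makes the standard error tiny too, so z is large and p is near zero.
p_pool = (p_control + p_aspirin) / 2
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_control - p_aspirin) / se

# Effect size for proportions (Cohen's h) stays far below even "small" (~0.2)...
h = 2 * math.asin(math.sqrt(p_control)) - 2 * math.asin(math.sqrt(p_aspirin))

# ...yet at population scale the absolute impact is substantial.
heart_attacks_averted = round((p_control - p_aspirin) * n)

print(z, h, heart_attacks_averted)  # z is roughly 7, h roughly 0.01, 1000 averted
```

Same data, three verdicts: statistically significant (z), clinically negligible per person (h), and important in aggregate (1000 events averted). That's the whole tension in one example.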
As for the paper, it does give a good background on the problems with pure significance testing and the importance of effect size, which really amounts to the same thing. It's also written at a pretty understandable level for most psychologists, unlike many quant papers.