Probably not the best one to ask as I had to google it to even jog my memory what it was!
I'm not totally ignorant of the area, but my stats knowledge is mostly limited to experimental research. I have yet to do any formal measurement development. I'd love to get that in a post-doc, though the more I learn, the more I realize I could spend the rest of my life hopping from post-doc to post-doc and still not learn everything I want to (right now I want to learn clinical trial methodology, chaos theory/non-linear analysis, pure quant, and computational neuroscience...in addition to that whole addiction/health psych stuff I'm supposed to actually, ya know...be focusing on). As fun as that sounds, I should probably get a grown-up job at some point.
I glanced at the wiki (terrible, I know, but the technique is basic enough to get the gist just from the formula). I can see some utility in it, though I imagine it would have far more utility in the bench sciences, where the error variance is generally much smaller and not expected the way it is in our work. I'm not sure how much it would get you that a careful glance at a scatterplot wouldn't, since unless I missed something in the math, the information you'd get from it seems to roughly equate to homogeneity of variance - just "centered," so perhaps a bit easier to see visually.
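To make the "careful glance at a scatterplot" point concrete, here's a minimal sketch of the kind of check I mean: fit a line, look at the residuals, and see whether their spread is roughly constant across the predictor. Everything here is simulated and the names are made up for illustration; a numeric ratio stands in for what you'd normally just eyeball on the plot.

```python
import numpy as np

# Simulated data with constant error variance (i.e., homoscedastic)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.size)

# Fit a simple line and compute residuals
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Crude numeric stand-in for eyeballing the residual scatterplot:
# compare residual spread in the lower vs. upper half of x.
# A ratio near 1 suggests homogeneity of variance; a big ratio
# suggests the spread fans out (heteroscedasticity).
lo = residuals[x < 5].std()
hi = residuals[x >= 5].std()
ratio = max(lo, hi) / min(lo, hi)
print(round(ratio, 2))
```

In practice you'd just plot `x` against `residuals` and look, which is exactly why I'm skeptical the formula buys you much beyond the picture.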
I think multi-method approaches are the way to go, and something psychology could stand to use a lot more of. It's semi-popular in measurement but nowhere else. My view is that if it's a "real" effect, you should be able to analyze the data 10 different ways and get the same answer. My experience suggests that this is almost never the case, which of course opens the door for people to just analyze things 10 different (perfectly justifiable) ways and pick their favorite outcome.