Well, it's ACFAS fellows, meaning people who are already board certified by ABFAS and well along in their careers. There's also probably an overrepresentation of hospital-employed DPMs.
Speaking of selection bias, I have a really meta question for @sdupre_apma: do you think the people who take any given income survey are a representative sample? In other words, you have a sub-population of DPMs (or plumbers or barbers or statisticians, etc.) that is more conscientious than the general population, and because of that they self-select into filling out the survey. But because they're more conscientious than their peers, this also translates into career success and higher income, so they upwardly bias the survey outcome in the process. Is this a significant effect?
@Adam Smasher I'm going to try to respond to this tonight, but I want to read through the nuances of the posts that followed first. There is some fascinating discussion later in this thread.
The short answer is, abso-100%-lutely. The single hardest part of designing a survey and shepherding it through the enumeration process is reducing the exact form of bias (among other biases) to which you refer.
Some people are more likely to respond than others; that's absolutely true. The key part of survey design is figuring out whether the factors that make someone more or less likely to respond are correlated (or in some way statistically associated) with your metric(s) of interest. So, would the things that might lead a podiatrist to be more successful at achieving higher compensation be associated with increased or reduced likelihood of response? YES. They might be earlier in their career, swamped, and with less free time to respond. They might be late in their career and less good at time management, billing, or any of innumerable important soft skills, and thus both earn less and be less likely to remember to respond (or have time to do so). Or, as people in these conversations mentioned, they might be very successful in their careers and happy to come on here and "brag," as some have asserted.
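To make that mechanism concrete, here's a toy simulation (a minimal sketch; the income distribution and response rates are invented, not drawn from any real compensation data) of how a response probability that rises with income inflates the respondent mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of 10,000 incomes (log-normal, purely illustrative numbers)
income = rng.lognormal(mean=np.log(250_000), sigma=0.35, size=10_000)

# Response probability rises with income rank:
# ~5% for the lowest earners up to ~30% for the highest
income_rank = income.argsort().argsort() / len(income)
p_respond = 0.05 + 0.25 * income_rank

responded = rng.random(len(income)) < p_respond

print(f"True mean income:       ${income.mean():,.0f}")
print(f"Respondent mean income: ${income[responded].mean():,.0f}")
```

Every individual answer in that toy world is honest, yet the respondent mean lands above the true mean; the bias comes entirely from who chose to respond.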
So, does this make results bad? No, not necessarily. Can it throw off results significantly? Sure.
What does this mean for us? We need to do one or more of the following:
(1) measure how respondents deviate from the general population you're assessing and...
(2) either account for that deviation and its associated weaknesses by weighting results (see the sketch after this list) or by reporting results split by the various subpopulations within, or...
(3) simply report potential biases and let people draw their own educated conclusions based on those transparently reported weaknesses
(4) during enumeration, adjust sampling strategy and perform targeted outreach as you see gaps in response occurring
(5) post-enumeration, assess those gaps and run further targeted surveys of populations that might have had reduced response, before reporting results
(6) select statistical methods for imputing/bootstrapping/etc. those missing population data based on existing understandings of those populations. There are VERY rigorous and well-supported methods for doing so. In my past work, for example, many countries don't acknowledge Kosovo as an independent nation. As such, many of my component indicators for Eastern European countries had no data for Kosovo, so I had to build a rigorous set of imputation models to estimate Kosovo's figures for those indicators, based on the relationships among the countries for which we DID have data, along with how those countries related to Kosovo on the metrics where Kosovo WAS tracked.
(7) use statistical methods for our conclusions that themselves account for non-randomness in missing data.
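To illustrate item (2), here's a minimal post-stratification sketch (all bracket counts, incomes, and population shares below are invented purely for illustration): instead of averaging raw responses, reweight each subgroup's mean by its known share of the full population.

```python
import numpy as np

# Made-up respondent summaries by years-in-practice bracket:
# early-career DPMs over-responded relative to their population share.
resp_counts = np.array([120, 60, 20])                 # respondents per bracket
resp_means  = np.array([180_000, 260_000, 310_000])   # mean reported income per bracket

# Assumed population shares for the same brackets (e.g., from licensure rolls)
pop_shares = np.array([0.40, 0.35, 0.25])

unweighted = (resp_counts * resp_means).sum() / resp_counts.sum()
post_stratified = (pop_shares * resp_means).sum()

print(f"Unweighted respondent mean: ${unweighted:,.0f}")      # $217,000
print(f"Post-stratified mean:       ${post_stratified:,.0f}")  # $240,500
```

Here the over-response from the early-career bracket drags the unweighted mean down, and weighting by the assumed population shares corrects for it. A real weighting scheme would use whatever frame data exist (licensure or membership rolls, etc.), but the mechanics are the same.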
... and it looks like I went on a roll and answered more than I planned. Either way, the short answer to your meta-question is that you're 100% right, but it's not a problem if you know what you're doing or are at least honest about the flaws in your data.
Warmly,
Sam