3. It must be understood that the basic nature of science is fluid, and that in any given year, exam items may be added and not specifically referenced in the study guide. The number of such items would be limited.
This last point is unacceptable, in my opinion. The scope is vague, and it gives them an out to let inappropriate questions creep into the exam. People shouldn't be failing exams of "minimal competence" because they got one or two new, unreferenced questions wrong.
It should be the burden of the test makers to ensure that test questions can be answered from study resources. If the primary study resources are determined to be out of date and it is deemed imperative for a test of "minimal competence" to include a contemporaneous novel question, then at least the secondary resource list should be updated prior to the exam. In other words, the test content should lag the resource list, not lead it. An honest review of test content would ensure that 100% of questions can be answered definitively from reference materials.
So, no, I'm sorry. I disagree with the ABR's assertion that "it must be understood." Says who? That's a rather bold thing to say. No, that is not understood. Sure, "science is fluid," but platitudes like that don't justify adding questions to an exam of minimal competence that cannot be found in the most recent revisions of multiple comprehensive, seminal textbooks in the field. And they undermine their own argument by defensively adding that "the number of items would be limited." If it's truly about science being fluid, then so what if 100% of the questions are not in Hall or Joiner? Why is it OK for 5% to be out there, but not OK for 100%? Why are they defending the inclusion of even a few unreferenced questions in a high-stakes exam like this? That number should be exactly zero. Furthermore, what defines "limited"? 5%? 10%? 25%? Can't they see the problem with this subjectivity?
I am disappointed that they were not willing to implement a hard policy against including questions that cannot be answered from the complete resource list. If they want to add a question that is outside the scope of Hall and Joiner, fine. I would argue that for an exam of "minimal competence," THREE primary textbooks should be more than sufficient to assess this, but regardless, fine -- if they really want to add a new question, then they should update the secondary resource list with a specific citation that covers it. Efforts should also be made to keep this list to a reasonable length and to exclude redundant references.
This is a decent start, but my recommendation to representatives of the four groups this letter was sent to would be to object strongly to the language used in point #3 and push the ABR to adopt a policy of reviewing every question to ensure that it can be answered from the listed resources; if it cannot, either throw the question out or update the reference list. At the very minimum, make it a non-scored experimental question.
The second point I would like to make is that none of this addresses the primary problem with last year's test: the absurd difficulty of large numbers of the questions, which any reasonable person would identify as well beyond "minimal competence" for safe clinical practice. It is still possible to produce a test full of irrelevant minutiae -- questions that in some cases boiled down to trivia, like simply not mixing up the letters and numbers in three-letter acronyms -- all of which can technically be found in Hall or Joiner. The ABR has yet to comment or offer concessions on the problem of inappropriate question difficulty, other than offering explanations like "clinicians approved of the clinical relevance of all questions" (paraphrased), despite overwhelming feedback indicating strong disagreement with this assertion in the setting of a statistically improbable increase in failure rates.
At the end of the day, nobody is policing the ABR. They are accountable to no one, they hold what amounts to a monopoly in the board certification business, and they may simply say, "Why is it our responsibility to tell students and instructors what we are going to test? You should be able to figure out what's important yourself." And we can't do anything about that. But if they want to engender faith in the system, prove they are not taking advantage of their monopoly status, and demonstrate the value of the board examination process, a good-faith effort to communicate to residency programs exactly what they define as minimal competence is necessary.