Physics & Radbio

Invoking professionalism is one of those overused, often manipulative moves for casting a view you disagree with in a negative light: “stop that, be professional.” I wish we could take that word back to mean doing the right thing, being honourable, but the wrong people have claimed it. When I hear it I flinch a bit with disgust inside.

 
Please don’t even bring up professionalism. The ABR and Wallner lack all professionalism and responsibility to our field. I agree with continued pursuit of litigation. This is so ridiculous, and the excuses they have given are pitiful.
 
In case anyone had any doubt what the purpose of this year's exam was:
You're in a small program? We don't care about you and want to see you fail. You don't belong in this field. It's absurd that failures are punished by being forced to retake the exams alongside the clinical, increasing the chances of failing all 3. Why not try and help those who failed pass rather than making it even more difficult for them the next time? Wonder how things would be different if it were the large programs that had most of the failures? Isn't it funny how they published a biased opinion about small programs before this year's exam was even administered? What a coincidence.

TO: Association of Residents in Radiation Oncology (ARRO) Executive Committee
    American College of Radiation Oncology (ACRO) Resident Committee

FROM: Valerie P. Jackson, MD, Executive Director
      Brent Wagner, MD, President

DATE: October 5, 2018

SUBJECT: Response to September 26, 2018 letter of concern

We appreciate the thoughtful comments you expressed in your cover note and open letter dated September 26, 2018. We understand the frustration from a small group of candidates who did not perform as well as intended on one or both of the recent ABR radiation oncology basic science qualifying exams. Below are responses to those concerns, as well as a few observations from the ABR perspective.

You indicate that angry residents have reached out to you anonymously and non-anonymously. We have also been the recipient of similar communications, many of which demonstrate a significant misunderstanding of the ABR exam development, standard-setting and scoring processes, the fundamentals of various organizational responsibilities, and the essence of the process by which the determination of the knowledge and skill set expected of trainees in radiation oncology is made. ABR volunteer leaders will be available to provide a detailed response to your communication at the upcoming ARRO and ADROP meetings in San Antonio. We believe that some general observations and clarification of a number of misconceptions are appropriate at this time.

Regarding ABR transparency with regard to its processes, policies, and results – we make concerted effort to share information that is useful to both candidates and programs while taking appropriate actions to safeguard the exam content. Pass rates for all exams are routinely provided to department chairs and program directors, and have been posted online. In 2016-2017, a change in web-based exam results reporting was established for what was thought to be an improvement in understanding. The ABR recognizes that for initial certification (IC), aggregate reporting may be less informative. Thus, we will be returning to our previous practice of annual posting. Candidates are provided with quartile scores, rather than raw scores, because quartile positions more readily permit assessing the performance of the individual in comparison to the peer group.

The logistics of the criterion-referenced standard-setting method (Angoff) have been widely described in ABR publications and in a host of academic peer-reviewed journals and texts. The Angoff method is employed by the majority of American Board of Medical Specialties (ABMS) member boards and is considered a best-practice for this type of professional assessment instrument. The Angoff method has been found to be highly reliable, reproducible, and valid. The ABR has tracked the validity of its own use of the Angoff standard-setting system and has never had deviations from discriminatory norms.

The statistical analysis that is performed tracks year-over-year performance by all candidates, which is helpful from a historical perspective and assists department chairs and program directors in assessing their programs and trainees. While informative, this analysis does not take into account factors related to individual exam questions, or the many variables associated with variation in candidates, training programs, and importantly, addition of new material and deletion of outdated material as clinical care and basic science advance. The rigorous and routine ABR psychometric analyses focus on reliability, difficulty, and discriminatory accuracy of individual questions, and in that regard, the performance of the exams this year was well within metric reliability.

As we have indicated previously, the exam development and implementation process have remained essentially unchanged for many years, including development by many of the same individuals. After the exam is administered, each candidate’s exam response data is reviewed to ensure that his or her data is complete and accurately recorded. The number of responses is confirmed as the correct number for the exam that the candidate took. An initial scoring is completed, and all scores are reviewed. Then, each question on each exam is reviewed statistically. Any question that does not perform as expected is sent to the appropriate committee for review. The committee determines whether the keyed answer is truly correct and that there are not other provided answer options that could be confusing. If the committee decides that the keyed answer is incorrect, or confusing, they may remove the question from the exam and the scoring process. Scores are then recalculated and checked again for accuracy before posting to myABR.

Your open letter also references a lack of change in didactic education, available study materials or in-service exam scores. The ABR has no direct means of evaluating quantity or quality of didactic education, as this is the role of the Accreditation Council for Graduate Medical Education (ACGME). However, careful analysis of performance by program size, which will be presented in greater detail at the upcoming meetings, suggests a direct relationship between program size (as one possible surrogate for didactic education) and exam performance. In both the physics and radiation and cancer biology exams, candidates training in programs of 6 or fewer candidates had a remarkable difference in pass/fail rates when compared to their peers who trained in larger programs. These differences were further magnified by the fact that 61% (55 of 90) of the programs reviewed had 6 or fewer trainees, and in the current exams, 46% of the peer group (100 of 217) were trained in those small programs. A majority of candidates who failed a basic science exam failed both exams, including a significant number of candidates who had failed the exam(s) previously. These findings raise concern regarding exam preparation.

With regard to the performance on inservice exams, there is no valid basis to compare performance in those assessment tools as compared to the ABR exams, which are developed by different people, for different purposes, and, in comparison to the ABR exams, are subjected to no psychometric controls, validation or review.

The radiation oncology study guides provided by the ABR are developed to offer guidance only as to topics which might be included in exams. A recent review of those documents indicates that these guides provide that information. The basic sciences in radiation oncology represent dynamic domains, with constant addition of new material. These items are included in the study guide and, as such, it is incumbent upon residency training programs to prepare trainees for these new ideas, terms and concepts. Comparison to materials provided for our diagnostic radiology (DR) colleagues is not appropriate; the domains assessed in DR exams include dozens of imaging modalities, hundreds of normal and pathologic entities, and thousands of imaging variations, with a primary assessment of a bi-modal correct or incorrect diagnostic decision.

The provision of greater detail on distribution of potential material was essential because of the enormity of potential material and the introduction of an entirely new DR core (qualifying) examination several years ago. The basic distribution of radiation oncology exam material has generally followed the previously published tri-annual clinical practice analysis (CPA) survey. The CPA has directly informed exam development in such specific ways as a reduction in pediatrics and brachytherapy content, based on declines in those practices by radiation oncologists in the field.

Your letter also refers to “standard” texts which have been basic resources for radiation oncology trainees for generations. Regrettably, a significant number of active cancer scientists agree that those texts are outdated. The ABR is committed to working with our volunteers to provide more updated reference sources for trainees and educators.

We agree that a lack of specialty-wide, consensus-driven curricula in physics and radiation and cancer biology is problematic, leading to remarkably heterogeneous teaching and preparation. However, curriculum development is outside the scope of the ABR’s mission: this activity is more appropriately managed by the ACGME Radiation Oncology Review Committee (RO RC) and various stakeholder specialty organizations. We have encouraged those stakeholders to update the previously developed physics curriculum, and to develop for the first time, a radiation and cancer biology standardized curriculum. Curriculum development should be associated with a greater attempt to provide homogeneous levels of basic science education to trainees.

In conclusion, the ABR stands by the reliability and supportability of its exams. We will continue to work with chairs, program directors, basic science educators and stakeholder organizations to better prepare candidates for the certification process.
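For reference, the modified-Angoff procedure the letter leans on reduces to a small calculation: each panelist estimates, item by item, the probability that a minimally competent candidate answers correctly, and the averaged estimates are summed into the cut score. A minimal sketch, using made-up panelist ratings rather than anything from an actual ABR panel:

```python
# Minimal modified-Angoff cut-score sketch. All ratings are
# hypothetical; real panels rate hundreds of items.

# Each panelist's estimated probability that a "minimally competent"
# candidate answers each item correctly.
panel_ratings = [
    [0.70, 0.55, 0.80, 0.40, 0.65],  # panelist 1, items 1-5
    [0.60, 0.50, 0.85, 0.45, 0.70],  # panelist 2
    [0.75, 0.60, 0.90, 0.35, 0.60],  # panelist 3
]

n_items = len(panel_ratings[0])

# Average the ratings across panelists for each item.
item_means = [
    sum(p[i] for p in panel_ratings) / len(panel_ratings)
    for i in range(n_items)
]

# The summed item means are the expected raw score of a borderline
# candidate, which becomes the passing (cut) score.
cut_score = sum(item_means)
print(f"cut score: {cut_score:.2f} / {n_items} "
      f"(~{100 * cut_score / n_items:.0f}% correct to pass)")
```

Note what this implies for the complaints in this thread: the cut score tracks the panelists' difficulty estimates, not observed performance. If new, harder items come in and the panelists' probability estimates don't fall to match, the effective bar rises even though the procedure was followed to the letter.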
 
WOW!!! The ABR HAS SHOWN THEIR CARDS. They are in the process of launching a coordinated attack on “smaller programs” (almost 50 percent of all programs, the ones the “big names” have not expanded to 12 residents a year). Do not sleep on this, folks, or you will be caught with your pants down. They blame the ACGME, stating that the ACGME should take care of this, knowing full well that the same crooked folks are in charge and will turn around and try to close down a large number of programs so they can keep their large programs and keep expanding for cheap labour and “academic” satellites.
 
So if you're well connected to someone involved with the ABR, you can become a radiation oncologist. That's all it says. There is absolutely nothing wrong with that. That is how radiation oncology was built. My suggestion to anyone who failed is to find someone involved with the ABR exam process and give them a bribe. Like they do in Zimbabwe.


Again, you are entitled to know if they used the Angoff method appropriately. Rumor has it that it wasn't done right and the exam ended up being an MSKCC exam in physics and a Stanford exam in radiation biology.


ABR Psychometric Testing: Analysis of Validity and Effects. - PubMed - NCBI
This article examines the validity and consequences of the ABR psychometric testing process, and we conclude that its validity can be challenged and that negative consequences, including adverse effects on allocating human and financial resources and on what is taught and learned in residency programs, should be addressed. The ABR could collaborate with the ACGME, education experts, patients, and public representatives to reform their testing processes.

Also, the ABR does not like criticism. At all.



A small number of residents? Are you unable to do simple math, Wallner? Maybe we should make you retake these exams.

Lawsuit. Lawsuit. Lawsuit.
 

The majority of old-timers and older rad oncs would not even get an interview for our specialty in the competitive environment we applied in, and would never get the chance to take the exam. These are the same people who feel comfortable lecturing us on the declining quality of residents. They also make sure they never have to take an exam, having “grandfathered” themselves in.
 
The ABR has repeatedly tried to drive it home that clinicians were very involved in writing this exam. I wonder why they are so defensive about this? Anybody who took it knows it was largely clinically irrelevant.

If you google "ABR bio committee 2018" you will see a group photo.

The only "clinician" I see is the chairman of MSKCC. I also see an author of the "out-dated" textbook the ABR has also repeatedly referenced, which coincidentally had a new edition come out right after the test was given this year. I wonder what the pass rate was at MSKCC?

I'm starting to lose track of all of these coincidences.
 
So....is anyone on this forum going to be at that ASTRO meeting? I won’t be there this year, but I really want to know how much ADROP is going to stick up for us. What the hell, how can we get rid of these dinosaurs? I agree with whoever said I care less about having to retake this exam now than I do about changing the leadership.
 
Wallner wouldn't even match at East Carolina's new program

guy wouldn't even sniff an interview tbh

low energy, low IQ from what I hear
 
I know for a fact that ADROP is SUPER up in arms about this. They're just as pissed as we are.
 

Heard from the inside that none of the clinicians reviewed the basic science questions. Also heard even the basic science members were strongly against a majority of the questions, but the committee heads (MSKCC and Stanford) had the final say. Also heard a lot of that was intentional via Wallner, with backing from above. Also heard Kachnic is getting played and letting her name get tarnished.
 
If they truly want to be as transparent as they claim, then they will report the failure rate from institutions with faculty involved with the exams vs those that are not. I suspect that there’s a dramatic difference, mirroring large vs small programs.

If so, then perhaps the reason for the historically low pass rate in radbio is simple. Some programs were more privy to the tested material than others. This needs to be disclosed. It has nothing to do with the size or “quality” of the program. It’s a terrible failure of communication, which should be the responsibility of the ABR/exam creators. The basic science exams aren’t difficult. Anyone can memorize the random minutiae - we all have been doing this and doing it well since undergrad. Just give us a clue as to what you want us to know. There’s a huge difference between understanding the material itself and trying to figure out what the scope of the content is. 100% of us are qualified to do the former, and the latter should not be our responsibility (nor should we be penalized for failing to do so). The notion that bigger programs help you cram more random factoids in your head is silly and offensive. Especially since this wasn’t an issue in any year before this one (their response neglects that fact entirely).

ARRO, I (we?) would appreciate if you request this info and bring these points up to the ABR as well.


 
They need to do some analysis of their own and figure out which programs are cheating.

If everything they are saying is true, then the issue is cheating programs (directors tipping residents off to material/questions and saving/sharing recalls).
 
From the ABR letter to ARRO-
"However, careful analysis of performance by program size, which will be presented in greater detail at the upcoming meetings, suggests a direct relationship between program size (as one possible surrogate for didactic education) and exam performance."

This is a classic example of confirmation bias; "careful analysis" not in the least. It is possible that program size is a surrogate for quality but other hypotheses are plausible and there are likely several variables at play.

Will any of the PDs at the ADROP meeting have the temerity to ask whether the ABR has analyzed test results according to institutions represented on the exam writing panels? I would be keen to know.

I would also recommend someone ask whether clinicians vetted the final questions for MP and RCB. Get them on tape.
 
By clinicians I mean practicing physicians that see patients. At least 2 members do not see patients (one is a full-time scientist, the other an administrator). Questions should be vetted by clinicians that see patients.

FYI a lot of people involved in creating the exam are not happy about being used as pawns.



The ABR needs to offer a mid-year retake exam so we can all move on from this. There is precedent for this, it's a computer-based exam, and a result 4 standard deviations from the norm is evidence enough that the exams given were way off base.

ABR, you need to schedule a mid-year retake so we can move on from this!! Offering a retake is not an admission of any wrongdoing; it is just responding to the reality of the situation. OFFER A RETAKE EXAM!!
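To put the "standard deviations" framing in concrete terms, here is a rough z-score check of one year's pass rate against the year-to-year spread of prior years. Every number below is an illustrative placeholder, not actual ABR data:

```python
# Rough z-score check: how far is this year's pass rate from the
# historical year-to-year spread? All rates are hypothetical.
from statistics import mean, stdev

historical = [0.92, 0.89, 0.94, 0.91, 0.88, 0.93, 0.90]  # placeholder prior years
this_year = 0.70                                         # placeholder 2018 rate

mu, sigma = mean(historical), stdev(historical)
z = (this_year - mu) / sigma
print(f"mean {mu:.3f}, sd {sigma:.3f}, z = {z:.1f}")
# With these placeholders z comes out near -10; with a looser sd of
# 0.05 the same drop is about -4 sd. Either way, the result sits far
# outside the historical range, which is the poster's point.
```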
 
The biggest determinant of how things will play out is whether the right people with power are upset AND have the temerity to actually speak up and demand change. If the meeting is just a bunch of deference and socializing, nothing will change. Somebody needs to let the ABR “leaders” have it. The damage they have done to their institution is irreparable in my view. The only path forward is a full leadership change and a restructuring of the boards. Otherwise, create a rad onc board and stop being the Tiffany Trump of the ABR. Let’s face it, diagnostic radiology is their focus; DR will always be their Ivanka.
 

I can confirm that at least one "small program" that had a faculty member involved with test writing had a 100% failure rate, but the resident(s) was/were taken by surprise, as there was no change in instruction or hint to account for the increased content and difficulty in cancer biology. So at least one person was tight-lipped about the changes, although this was probably out of pure laziness/apathy. I would imagine they would use this to defend themselves if pushed, given their propensity so far for manipulating or outright ignoring data and drawing illegitimate conclusions that agree with their preconceived notions. It would be interesting to see the results from large places like Stanford and Memorial.

Program directors and chairs need to push aggressively for a retake exam at ASTRO to put an end to all this. It is the simplest and fairest solution at this point. It is punitive to make residents retake the exam prior to the clinical, and it increases the chances of additional failures. Not a level playing field: having to study for 3 exams concurrently while others only have to study for one? These exams should ALWAYS be given twice a year. The notion that the ABR can't do this because of financial considerations is laughable.

It is frustrating to see the ABR refuse to concede that this was an unfair exam, especially when their own comments undermine their argument that it was appropriate. They admit that study resources were out of date and that there was a feeling there should be an increased focus on cancer biology, yet the exam was changed before updates to both the Hall and Joiner textbooks, which came out this year. If they want to change the exam content and difficulty, fine, but they need to give fair warning to programs. The first step is getting the ABR to admit that test-takers this year were at a disadvantage compared to prior years due to a change in test content without a corresponding change in study resources, and then agreeing to a fair compromise: clarification of the tested content and a re-examination, so that everyone has a chance to prepare equally for the clinical. It is a simple solution that would put an end to all of this nonsense and stop people calling for the leadership's head on a stick. Refusal to compromise will just add further fuel to the idea that the exam was used to hurt small programs (which it obviously was, to anybody paying attention) and that large programs were allowed an unfair advantage through a variety of means.
 
This will not come out in favor of residents without litigation. PDs have no control over the ABR. Radiologists have no control over the ABR. The ACR has its own beef with the ABR, but nothing has come of it. This would be no different, except for the huge fail rate. In 10 years I don't think this organization will be around, or if it is, it will not function in the same capacity. Doctors are already moving away from these organizations. We're just waiting for hospitals to do the same. Wallner will definitely not be around. Kachnic and the residents that failed will. It's important that we not lose track of the fact that the person making decisions is on his way out.
 
Suing the ABR is not going to result in an early retake exam this year. That said, no love lost here if you want to go after the ABR in court to collect damages for expenses and whatever else was incurred. By all means, file suit, generate headlines, and make them go through the motions if they continue behaving like tyrants.

This has done serious harm to our image. Really, it's embarrassing. But it's not completely irreparable yet, and the best way to repair it would be to simply give another test. The ABR could do this and send a message that:

1. Allegations that the exams are being used to shut down small programs are false. Allegations that large programs are given an unfair advantage are false. We care about the small community programs that produce providers who fill a crucial need for oncologic care in small and rural communities. We are committed to helping these people succeed and to ensuring they provide clinically competent care, with tests that assess for that competence.
2. We are not eating our young. If there is an anomaly with our tests, we are willing to offer a fair and equitable solution. We don't just give people a free pass, but we don't set them up for a repeat failure either.

Problem solved: everybody now knows they need to study cancer biology and physics much harder than in the past, there are no more surprises next year, and we all move on from this.
 
Trying to limit my negativity, but before ADROP, ARRO, ASTRO, or whoever is white-knighted as a savior for this incident, recall that these are the organizations that have facilitated residency expansion despite published data that we are training more rad oncs than society needs, and a survey showing over half the workforce is concerned about oversupply. I still don't think they have published the survey.

If that is what is needed, then those organizations will do what they do, but it is hard to see them as a benevolent power advocating for trainee rights when they facilitate trainee expansion in the face of data saying these positions are not needed, and then roll out fellowships on the back end.

Best to all. Know the devil you work with.
 
A retake is crucial. If the ABR doesn’t budge, then a lawsuit is completely plausible. Anybody at these meetings: record conversations. Email discussions? Save them as well. Past remarks by our leadership on forums? Screenshot them. This is completely ridiculous and unacceptable.
 
There has never been a failure rate this high for any board exam in medicine. The fact that some people are even having the multifactorial or small program discussion is ridiculous. This was a bad exam because someone wanted it to be.

I recommend one of these:
Click here to support Practicing Physicians of America organized by Westby G. Fisher

Yeesh. They're really doubling down on this. Small programs that don't have internal access to the workings of the ABR are clearly to blame for a sudden, significant one-year drop in intelligence that led to these outrageously low pass rates.

I mean, the level of cognitive dissonance here is astounding. I hope ADROP lets them have it at the ASTRO meeting.
 
According to the ABR, the "frustration" was from "a small group of candidates who did not perform as well as intended."

Their attempts to spin this are pitiful. Rather than reporting the true pass rates, they reported the pass rate for first-time exam takers only (70% and 74% -- I've heard the real overall numbers are 2-4% less). As if that extra 2-4% makes those numbers OK. Hey -- the pass rate on physics was 68% -- we'll just throw out the repeaters and call it 70% and that will sound just fine to people. Nothing to see here, move along...

"The majority of candidates passed." Mr. Wallner feels the need to mention this in all of his communications. Yes, technically >50% is "a majority. You've got me there. What can I say?

A small group of candidates? Why include that qualifier "small"? What defines small? Would you consider 65 candidates "small" (the most conservative estimate -- the highest estimate is 122) when this number is historically around 20? They then go on to explain that "A majority of candidates who failed a basic science exam failed both exams, including a significant number of candidates who had failed the exam(s) previously."

Again, non-specific language. No numbers. I'm sure you're bummed I caught this, ABR, because you totally had everyone else fooled.

This was a hit job on small programs from up above (essentially HALF of residencies by their own admission). No conspiracy hat needed -- they've as much as said so themselves. Their letter indicates they have already analyzed the data to support this conclusion and are ready to present their findings (with one hell of a serving of confirmation bias).
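The first-time-only framing the poster objects to is just weighted-average arithmetic. A quick sketch, taking the quoted 70% first-time physics rate at face value and assuming a hypothetical repeater count and repeater pass rate, shows how excluding repeaters lifts the headline number a few points:

```python
# How excluding repeaters can inflate a headline pass rate.
# The 70% first-time figure is from the post above; the candidate
# counts and the repeater pass rate are assumptions.
first_time_n, first_time_rate = 200, 0.70   # quoted first-time rate
repeat_n, repeat_rate = 17, 0.30            # hypothetical repeaters

overall = (first_time_n * first_time_rate + repeat_n * repeat_rate) \
          / (first_time_n + repeat_n)
print(f"first-time: {first_time_rate:.0%}, overall: {overall:.0%}")
# With these assumptions the overall rate is ~67%, about 3 points
# below the reported first-time number -- consistent with the
# poster's "2-4% less" estimate.
```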
 
The ABR was willing to do 'careful analysis by program size' to find a direct correlation between small programs and failure rates, but they were NOT willing to re-evaluate scores on this exam when pass rates were 4-5 standard deviations lower than the past 15 years? If they were willing to evaluate anything, why not see where the issue on the exam was? ~40% of the questions are from prior years. Was the class of 2019 3-5 standard deviations off on these questions as well, or just the new questions? That's just as easy to answer as going back and analyzing pass rates based on program size.

I am honestly shocked. I have been very careful to try not to unfairly cast blame on the ABR or join in conspiracy theories. But I truly cannot think of a reason that the ABR wouldn't be willing to at LEAST look into this exam to see what went wrong and try to fix it (for the class of 2019, who are getting absolutely screwed here). Connecting the dots, it looks more and more like they had an ulterior motive and are not even ashamed or trying to hide it.

ARRO did an amazing, professional job with their letter to the ABR. I can only hope that the program directors reading this thread follow suit and don't take no for an answer. This has a huge ripple effect on the class of 2019 (and beyond). The odds are so heavily stacked against those that failed both exams and must now pass 3 exams next year. They are devoting unnecessary time to rad bio/physics minutiae instead of preparing for clinical practice. Very likely many of these individuals will fail one of the exams next year, which will put them a year behind on their oral boards. This is ridiculous, and none of us should stop until the ABR does something about it -- whether by choice or by force.
 
There is no tinfoil-hat conspiracy theory here. Read the ABR letter. They are showing you their cards: half the programs, the "small programs", are not teaching well and are therefore to blame for this fiasco. Continuing to characterize this as a "conspiracy" is only playing into their hands and guarantees that nothing meaningful comes of this. Wallner already told you that NO RESPONSE from you would alter the way he feels or change his mind. Good luck reasoning with our “leaders”; you'd better have some teeth or they will eat you alive.
 
This was Wallner's desired outcome. He wants to see the small programs' first-time pass rates on all exams plummet and have those programs go on probation. Increasing the radbio and physics failures and forcing the retake before the clinical exam has exactly the ripple effect they wanted. They can explain it away all they want, but the truth is so clear. I know I am repeating myself, but it bears repeating, and I want to make sure all of our peers understand the sequence of events as they transpired so they can draw the most appropriate conclusion.

This is really the smoking gun...

What Wallner et al wrote before the exam was given in the odd (at the time, now very clearly not odd) rebuttal to the Lee/Amdur article:
"most postgraduate training programs have six or fewer trainees and small faculties. In fact, most RO programs possess neither the resources nor the faculty depth and breadth described as part of the authors’ departments. One of us (PEW) served as a faculty advisor for the Association of Residents in Radiation Oncology (ARRO) for six years and became keenly aware of the lack of didactic programming and schooled educators in many of our training programs. Numerous faculty members in these small departments are committed almost full-time to clinical activities, with postgraduate trainee education seen as merely an adjunct to these clinical activities."

Up until this point, pass rates were consistently 90+% over the past DECADE across all programs. No evidence to support his assertion other than his "keen awareness." None. Zero. Evidence to the contrary (increasing USMLE scores). Then suddenly...

What Wallner et al wrote after the exam in response to being called out by ARRO:
However, careful analysis of performance by program size, which will be presented in greater detail at the upcoming meetings, suggests a direct relationship between program size (as one possible surrogate for didactic education) and exam performance. In both the physics and radiation and cancer biology exams, candidates training in programs of 6 or fewer candidates had a remarkable difference in pass/fail rates when compared to their peers who trained in larger programs. These differences were further magnified by the fact that 61% (55 of 90) of the programs reviewed had 6 or fewer trainees, and in the current exams, 46% of the peer group (100 of 217) were trained in those small programs.

He had the power to do whatever he wanted with the exam and the pass rates. To abuse this power would be such a rotten thing. Would somebody who did this agree to a re-evaluation of the exam and offer a re-take? No, of course not. That's what an honest person without an ulterior motive would do. An unethical person would simply lie, present misleading data, spit out logical fallacies/ad hominems, and deflect, even in the face of clear overwhelming evidence to the contrary. So what's happening here?

The saddest part is that he has it so backwards and is criticizing programs that put residents in a clinic a lot. You can be a competent radiation oncologist without being an 80/20 MD/PhD. In fact, probably more so. Although we know you don't want to hear that.
 
The take home is that unless you can match into a top program, don't apply to radiation oncology.


 
What's your solution for the excess residency spots? The anger may be misplaced at Wallner, when it really should be aimed at the ASTRO leadership/chairmen who are fostering the residency expansion. It's unfair to target the smaller residencies when it is the larger programs behind much of the expansion; but some slots clearly have to close. Politically it will be impossible to force places like MDACC to give up slots. Just trying to play devil's advocate.
 
Fair point, but this is NOT the way to solve the residency expansion problem. The geographic maldistribution of radiation oncologists toward major coastal metro areas is well documented in the literature. Trying to shut down the small programs in our country's interior in favor of instead training residents primarily at large (7+ residents, according to Wallner) programs, which are mostly in big coastal cities, is only going to worsen the maldistribution problem. And do you honestly think midwestern spots are just going to disappear and not be compensated for by an increased complement at the big guys, placed at their gaggle of satellites? I'm struggling to understand Wallner's prejudice towards small midwestern and southern programs, but perhaps it is as simple as him being a snobby coastal elite. People who relocate from New Jersey and New York to southwestern Florida typically aren't the most pleasant bunch. Especially towards young people.
 
Wow.

I initially did not buy the "agenda against small programs" theory at all. There's no way that could be true, I thought.

But after reading the ABR's letter to ARRO, I realize I was completely wrong. It's written in there clear as day. The agenda is extremely apparent.

That's horrifying.
 

If there is a PD or chair following this thread, please take this to the ADROP meeting. What is happening is absurd!
 
There is only one way forward and it is brand new leadership. Nothing else. The two “leaders” at the ABR have done irreparable damage to the institution. One of them also leads one of the most corrupt organizations in oncology; the other leads an academic department. This is not the time for quick fixes. Read the letter. If you don't see it, I cannot help you; you're either part of the problem or somewhat dense.

The ABR is in the process of launching a well-coordinated attack on “small programs”. This is only the beginning, and it will be done in conjunction with the ACGME: close down “small programs”, make up the difference with growth at the “big name” programs, and place more people in “academic” satellites. What else is the agenda?
 

Ditto. Every time I try to move on and focus on next year’s exam, the ABR just says something worse. As KHE88 pointed out, the “6 residents or fewer” line mentioned to foreshadow poor resident/education quality was immediately, conveniently, and verbatim recalled to explain away a historically and nearly statistically impossible fail rate. It also corroborates the speculation that the highest cut score ever was applied to the physics exam, in order to match the low radbio pass rate.

I sincerely don’t want to believe any ulterior motive is at play but the evidence keeps growing. ARRO/ADROP, please mention these “coincidences” to the ABR as I would love to hear the justification.


 
Do you really think Wallner has some irrational bias against small programs? (I thought he helped run a now-defunct small residency program at Cooper at one time.) It is more likely he is just trying to reduce slots (admittedly pretextually) because of where the field is headed, with too many docs and too few fractions. Complaining to your chair (who probably wants you to be his fellow)? They are the ones who put us in this position. Having the number of rad oncs double over the next 15-20 years is going to hurt you so much more than this exam.
 
This x1000

You know what would have been easiest for Wallner to do? Just pass everyone. He has a good gig; he doesn’t need to create problems. I think he honestly believes in the validity of the test and the method, and, you know what, maybe he should. Radbio may have had content changes, but physics? The content hasn’t changed in a decade, and the number correct needed to pass was the same as at least one previous year, yet the pass rate was way down. Maybe the trainees were partly asleep at the wheel, because the physics result never fit the narrative of a changed exam, sneaked-in new content, or poor study guides. And no one appears to even consider this. To be clear, the goal is not to pile on or punish the trainees or this class, but if none of the tenets of the “corrupt” radbio exam apply to physics, and no one has even raised them for physics, at what point does that theory need to be re-evaluated?

Trainees should not be punished over an exam, but if the physics exam is based on content that hasn’t changed, textbooks that haven’t changed (because the content is the same), the same Raphex practice tests published yearly, and a percent-correct cutoff identical to at least one previous year’s, is it really so outrageous to examine the education programs or the preparation for the exam? Radbio may be a different story if its content was changed, but not physics.

And to see academic leadership that cared so little about the process, the job market, or expansion suddenly care so much when their note-writers and potential fellows are threatened, and then be framed as a force for good for trainees, is wrong and shortsighted.
 
Highly unlikely, but in a perfect world this could be the window of opportunity for the chairs, ABR, ASTRO, etc. to say at least something like....

We have seen an anomaly in the pass rates of our board exams, with some preliminary data suggesting that pass rate may correlate with program size, etc. In addition, numerous recent publications have suggested a future over-supply of radiation oncologists. Therefore, we have recommended that all program expansion be put on hold to allow us to thoroughly study this matter.....

Then, in a few years, report back to everyone with better statistics on jobs, supply, pass rates, etc. and *hopefully* recommend no further expansion.....
 
Doubtful. Having read the textbook and done very well on the Raphex exams, I can tell you the physics test questions could not have been repeats from past years. You can easily write very difficult questions based on the same concepts. Just compare a 2012 Raphex to the most current one if you need a reminder.

And if that’s your concern, then let the current class retake a past physics exam in January, because I promise the pass rate will be much higher.

And I know for a fact PDs and chairs are reading this thread. It would be nice to hear them speak up and help fix this mess.

The solution is not difficult or expensive, especially considering how much we pay for each exam.



At the beginning of this thread’s revival, there wasn’t as much disbelief and befuddlement about physics as there was about radbio, but there was still a certain level of incredulity from a proportion of the physics takers. And multiple people who felt they did fine on physics ended up failing, likely related to the higher-than-normal pass line.

The continued “lol, maybe you PGY-5s are actually just complete idiots who should’ve studied harder, and maybe now you’ll care about the job market” line is getting really old. You’re basically on Wallner’s side if you believe the best way to deal with the job market is to fail 30-50% of the PGY-5s on a clinically irrelevant board exam.
 
I never said any of that; re-read my post and adjust your tone. I have never made it personal, and I have stressed on multiple occasions that trainees should not be punished. Nor have I ever said the exam should be a solution to a different problem. Ever.

And I doubt Wallner cares one bit about the job situation. Do you really think someone with a great gig, who went into private practice to make money, would choose to cause himself more headaches over an issue that has no bearing on his career or life?

What I said was that, for physics, it is reasonable to hold accountable some factors beyond the content, the review material, and a changing exam focus. Until the rates were posted, there was maybe one post here from examinees complaining about physics. And white-knighting for the people who are actively contributing to a bigger problem is exactly that and nothing more, not a stealth way to solve a market problem that will not change based on a single year.
 
“White knighting” ADROP = asking them to stick up for their screwed-over residents, who even you concede (at least for radbio) took a flawed exam? Huh? They may be contributing to the bigger problem, but that is completely independent of these exams. I’m not sure why it keeps going back to the damn job market.

You can’t have it both ways: you can’t say trainees shouldn’t be punished and then imply they shouldn’t do whatever they can, including leaning on leadership, to push back against being inappropriately punished. It’s disingenuous.

I’m also not sure why you would need more than one bad exam for this situation to be considered unacceptable. But I would add that physics has historically been more variable, and that it’s peculiar that the highest cut score ever was used despite producing the lowest pass rate. That doesn’t seem very concordant, except that it matches the egregiously low radbio pass rate.


 