In-service


bcl2

I just destroyed that thing. Good thing I reviewed ampulla of Vater last night.....

 
LOL, me too! I was thrilled to see so many childhood leukemia and Ewing's questions!
 
Of course penile and urethral carcinoma make their annual appearance :D

It's almost a given to know penile/urethral staging for the exam
 
How does the test compare to the actual written boards?
 
According to our chief, the in-service is supposedly orders of magnitude easier than the RadBio/Physics writtens. I can't personally comment on this, but perhaps others can.
 
According to our chief, the in-service is supposedly orders of magnitude easier than the RadBio/Physics writtens. I can't personally comment on this, but perhaps others can.

The inservice is far easier than the physics boards. Radiobio boards were pretty easy if you studied the old tests and used the study guides, IMO.

The clinical writtens, from what I have heard and seen in recalls, tend to be more relevant than the questions asked on the in-service. Certainly less mesothelioma, urethral, penile, ovarian, and vaginal cancer than is tested on the in-service.
 
Probably a ridiculous request, but if anyone has gone through the exam and has the correct answers, could you please post them? Thanks.
yg
 
If everyone did 10, wouldn't take so long...

Anyway, the test is terrible. The Education Committee of ARRO should write a letter. They actually ask about the study name/number rather than the concept of the study. They ask about the TG groups. It's lousy. I wonder who writes the damn thing. It's fine that they ask about rare cancers, but the stuff they ask about common cancers should be relevant. And the percentage questions were useless, too, b/c the answer choices would be too close together. I could go on all day. So glad I don't have to take it again.

S
 
If everyone did 10, wouldn't take so long...
It's lousy. I wonder who writes the damn thing.
S

Well I do...part of it, anyway. ;)

And if you think it's bad now, you should have seen it 10-15 years ago, when it bore no relevance to anything (least of all the ABR Writtens), and many of the questions were even more poorly written, outdated and irrelevant than they are currently.

You have to realize that:

(a) these questions are only run by a couple of people who attempt to clarify, edit, and check for accuracy, although admittedly they receive nowhere near the vetting that ABR questions receive;

(b) VERY few people are available, let alone willing, to write for the ACR In-Training exam, either because they have no time or incentive to do so, or else are already writing questions for the ABR Written Boards (a writer is technically not allowed to do both, as it would represent a conflict of interest); and

(c) especially with respect to the clinical questions on the In-Training exam, please realize that all 200+ questions are almost invariably written at the last minute (so no pre-editorial screening is possible), and all have to be discussed, edited and selected for inclusion on the exam by a diverse group of around 10 people during a single, 8-12 hour marathon session.


Also, interesting that some people consider the In-Training the more difficult exam, and others the Written Boards.
(My tendency is to think the In-Training is a bit easier, particularly in biology and physics, although I might be wrong.)

One thing I can state though is that the free ASTRO Radiation and Cancer Biology Practice Exam and Study Guide is deliberately pegged "higher" than both the In-Training and Written Board exams, in keeping with its main role being as a study aid.

Finally, I should mention that the ACR posts on their website a final copy of the annual In-Training exam, an answer key, and answer justifications about 3 months after the exam, so be on the lookout for it (particularly once you receive your scores). Unfortunately, they don't advertise this well, and only leave the thing online for a relatively short time period, so you gotta be quick!
 
thanks for the post. good to know what goes on behind the scenes.
 
Sorry - I guess insulting some unknown examiner is pretty easy. Now I feel a little bad. I know it's hard to write good test questions. It just seems like the oral boards (based on recalls) actually test you on clinical knowledge/technique, while the written exams seem to focus on trivia.

Since you are one of the writers, I'm curious - why the desire to test numbers so hard and also rare cancers? For example, the question that asks for the recurrences on the CW and regional LNs with no RT in NSABP B-18 to B-27 (the range of percentages goes from 1.5-7% for the CW and 1.5-4.5% for the regional LNs), or the most common site of LN metastasis in a patient with penile cancer, or the primary treatment of male urethral carcinoma, or the staging of ampulla of Vater cancer. The incidence of these questions on the exam is about 1000x higher than the incidence of the disease :)

Because there are some good questions - just looking at it right now, there is a question regarding what volume/dose to treat for an early-stage glottic cancer (very relevant), or what CTV margin to use for the pelvic nodes (based on RTOG guidelines), or what % of early-stage tonsillar carcinomas will fail in the opposite neck if only the ipsi is treated. I would just presume the writers (if they are younger or mid-career) would want to test relevant clinical knowledge/technique based on current evidence/guidelines rather than trivia.

With the key, would it be possible for the ACR to also list the percent of examinees that got a particular question correct? That might be helpful, as well - i.e. what I think is trivia might actually be common knowledge and something I need to know.

-S
 
Lousy test was obviously written on a plane, while coming back from some nice conference.
The problem with writing letters, etc., is that you may easily end up complaining about your prospective employer :)

 
Having taken other in-service exams in other fields (and discussed board exams as well), this may provide a different perspective. The general way these things are approached is as follows:
1. Uncommon questions/presentations of common diseases that we should know backwards and forwards, e.g. breast cancer recurrence patterns.
2. Common or basic questions/presentations of uncommon diseases, e.g. ampulla of Vater staging.
When you look at these things that way, it makes sense... to me anyway.
 
Since you are one of the writers, I'm curious - why the desire to test numbers so hard and also rare cancers? For example, the question that asks for the recurrences on the CW and regional LNs with no RT in NSABP B-18 to B-27 (the range of percentages goes from 1.5-7% for the CW and 1.5-4.5% for the regional LNs), or the most common site of LN metastasis in a patient with penile cancer, or the primary treatment of male urethral carcinoma, or the staging of ampulla of Vater cancer. The incidence of these questions on the exam is about 1000x higher than the incidence of the disease :)

Have to laugh about this...would you believe that EVERY SINGLE YEAR I walk into the room where the clinical group is vetting questions, the first thing out of my mouth is "Is it really necessary to test on penile, urethral and ampulla of Vater cancers? The residents hate us for that, you know." :laugh:

But seriously, what it comes down to is what the teams of question writers choose to write about...and you can't really demand they write on certain topics but not others, de-emphasize the percentages, or mix things up more, when they're all volunteers in the first place. (Personally, I do try to ask politely, which sometimes works but sometimes doesn't.)

That's also why it seems like practically the same questions, or at least the same question content, are repeated year after year.

With the key, would it be possible for the ACR to also list the percent of examinees that got a particular question correct? That might be helpful, as well - i.e. what I think is trivia might actually be common knowledge and something I need to know.

Unfortunately, all the stats concerning how well each question "tests" (i.e., does the distribution of responses clearly discriminate between the lower, mid and upper third of test performances) are kept confidential by the testing company.

I've always been interested in this myself actually, if for no other reason than to get a sense of what kinds of questions work and what don't.

That being said, one thing I do know is that when you look at the answer key and you see "NS" next to a particular question, that means "Not Scored"...in other words, the question didn't "discriminate" well, and nearly everybody either got it right or got it wrong.
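For anyone curious what that kind of item analysis might look like under the hood, here is a minimal sketch of the classic upper/lower-group discrimination index (proportion correct among the top third of overall scorers minus the proportion correct among the bottom third). This is purely illustrative; I have no idea what statistics the testing company actually computes.

```python
# Minimal sketch of a classic item discrimination index (illustrative only):
# D = (proportion correct in the top third of scorers) - (proportion correct in the bottom third).

def discrimination_index(item_correct, total_scores):
    """item_correct: 0/1 flags for one question, one per examinee.
    total_scores: overall exam scores, in the same examinee order."""
    n = len(total_scores)
    order = sorted(range(n), key=lambda i: total_scores[i])  # rank examinees by total score
    third = n // 3
    bottom, top = order[:third], order[-third:]
    p_top = sum(item_correct[i] for i in top) / len(top)
    p_bottom = sum(item_correct[i] for i in bottom) / len(bottom)
    return p_top - p_bottom  # near 0 means the item doesn't separate strong from weak examinees

# Example: all six examinees got this item right, so D = 0.0 -- the item tells you
# nothing about who knows more, which is the kind of question that ends up "NS".
print(discrimination_index([1, 1, 1, 1, 1, 1], [55, 60, 70, 80, 85, 90]))  # 0.0
```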

Lousy test was obviously written on a plane, while coming back from some nice conference.

I should be so lucky as to be even attending some nice conference, let alone find the time to be writing questions on a plane home! ;)

The reality is that I spend quite a bit of time each summer on writing, editing, fact-checking, composing answer justifications (with references) and trying to balance question content for both my own questions and those of my fellow writers. And THEN I have to work with the testing company to get the things shortened to contain the minimum number of distracting adjectives, and worded the way they want, while still trying to retain clarity and relevance. And THEN, in the fall, I have to go and defend my questions again in front of the assembled panel of group leaders from all the biology, clinical and physics specialties and sub-specialties.

Granted, I do get to stay at a very cushy Ritz Carlton for a couple of nights ( :cool: )...but otherwise, the entire process is actually a labor of love, not to mention hard work.
 
Does anybody know whether the in-service exam's answer key has been posted? If so, where can we download it? Thanks.
 
Finally, I should mention that the ACR posts on their website a final copy of the annual In-Training exam, an answer key, and answer justifications about 3 months after the exam, so be on the lookout for it (particularly once you receive your scores). Unfortunately, they don't advertise this well, and only leave the thing online for a relatively short time period, so you gotta be quick!

Speaking of the ACR answer rationale, I've got some serious problems with question 132 from the 2010 exam:

What is the estimated probability that a patient with clinical stage T1c prostate cancer with a Gleason score of 3+3 and a PSA level of 6 ng/mL has extracapsular extension?
A. <10%
B. 20%
C. 30%
D. >50%

Correct answer is A. RATIONALE: Approximately 30% of patients with early-stage prostate cancer have extracapsular extension (ECE) of only a few millimeters. The radial distance of ECE is an important measure that influences treatment strategies for patients with localized prostate carcinoma, especially for the use of brachytherapy. In this particular case, the risk of positive nodes is ~11% [using the Roach formula of 2/3 PSA + (Gleason score − 6) x 10]. That is, 2/3(6) + (6 − 6 = 0) x 10. 2/18 + 0 = 0.11 (or 11%). The probability of +ECE in this specific case is ~30% x ~11% = ~3% to 4%. REFERENCES: Roach M. Equations for predicting the pathologic stage of men with localized prostate cancer using the preoperative prostate specific antigen. Journal of Urology. 1993;150:1923-1924. Halperin EC, Perez CA, Brady LW, eds. Low-Risk Prostate Cancer. Perez and

WHAT?!?

As I understand the Roach equations, there are several mistakes in this explanation.
1) In calculating the risk of positive nodes, the equation is 2/3 x 6 = 4, not 2/(3 x 6) = 2/18.
2) The Roach equations calculate a percentage (so the examiner's calculation should be interpreted as 2/18 + 0 = 0.11%).
3) The risk of ECE is meant to be estimated solely from the equation, rather than by multiplying the risk of positive nodes by the generic risk of ECE in early-stage prostate cancer (which the examiner used to calculate the answer of 3-4%).
4) Finally, the answer is incorrect based on either the Roach equation for ECE [3/2 x 6 + 10 x (6 − 3) = 39%] or the Partin tables [19% (95% CI 16-21%)].

Questions like these not only damage the validity of the test results, but also will inevitably cause future residents to misunderstand how to use the Roach formulas.
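For anyone who wants to sanity-check the arithmetic, here is a quick throwaway sketch of the two Roach equations exactly as stated above (LN+ risk = 2/3 x PSA + 10 x (GS − 6); ECE risk = 3/2 x PSA + 10 x (GS − 3)). Purely illustrative; for actual patients use the Partin tables or a real nomogram.

```python
# Back-of-the-envelope check of the Roach equations as quoted in this thread (percent risks).
#   LN+ risk ~ (2/3) * PSA + 10 * (Gleason - 6)
#   ECE risk ~ (3/2) * PSA + 10 * (Gleason - 3)
# Results are clamped to 0-100% purely for readability.

def roach_ln_risk(psa, gleason):
    return max(0.0, min(100.0, (2.0 / 3.0) * psa + 10 * (gleason - 6)))

def roach_ece_risk(psa, gleason):
    return max(0.0, min(100.0, (3.0 / 2.0) * psa + 10 * (gleason - 3)))

# The case from question 132: cT1c, Gleason 3+3=6, PSA 6 ng/mL
print(roach_ln_risk(6, 6))   # 4.0  -> ~4% LN+ risk (not 0.11, and not 11%)
print(roach_ece_risk(6, 6))  # 39.0 -> ~39% ECE risk, nowhere near answer A (<10%)
```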
 
Wow. That is amazingly poor reasoning/rationale for the answer. Madame Curie, put in a phone call for that one... Sheesh.

An aside: since I started residency and got pimped on the Roach formula, it has always annoyed me how off the mark it is compared to the modern Partin tables. In an era where we have computers in every room and smartphones in our pockets, I don't get the rationale for using an equation that overpredicts low-risk patients' risk and underpredicts high-risk patients' risk when it takes 15 seconds to look up the Partin table. I still don't know the equations off the top of my head.

-S
 
So glad that's over. Lots of Hodgkin's and bladder this year, and, of course, they had to indulge us with the penile/urothelial/LCH medley.
 
Got to hand it to them, they really pulled out a curveball with the incessant hammering of fallopian tube cancer. Didn't see that one coming...
 
My God! I knew I should have looked at Fallopian tube staging!!

-R
 
My favorite is when you look at a question after and you literally have NO IDEA how to even begin to look it up, because the answer doesn't exist. What is the point of this exam? To ensure that all graduating radiation oncologists are also board-certified fallopiologists?
 
My favorite is when you look at a question after and you literally have NO IDEA how to even begin to look it up, because the answer doesn't exist

True. Which is why I'll be letting them do the work for me when they post the explanations in a few months. Of course, half the time the explanation is something like "this requires knowledge of (insert concept here)"
 
Took it yesterday here in Canada... pretty awful, as I didn't really review any fallopian/penile/urethral!!

Just curious how different programs treat the exam in terms of performance? (i.e., does your program care more about improvement vs. the actual score?)
 
My program actually looks at these scores for possible remediation and I don't think they know what kind of ridiculous questions are asked on these things...:(

The sheer volume of diseases that we have extremely little or no exposure to really needs to be addressed (unless the actual boards are this way!??!?!?!?!)
 
Wow, I feel for you dude. No resident should be held against the standard of that poorly written test!
 
Our faculty has specifically told us not to study and that they don't care how we do. If anything, they may use it to adjust our academics but that's about it.
 
My program actually looks at these scores for possible remediation and I don't think they know what kind of ridiculous questions are asked on these things...:(

The sheer volume of diseases that we have extremely little or no exposure to really needs to be addressed (unless the actual boards are this way!??!?!?!?!)

Wow that sucks ... considering it is a horrible exam to begin with. I'm pretty sure most of us here don't pass the exam the first time we take it .. lol
 
Over-emphasis on trial recall has been recognized, and TXIT is IMHO improving.
 