I just knocked out all 30 for Q1.
I know my score is totally meaningless since this is a Yoda-style "do or do not" kind of test, and everyone who does it gets credit, and I did it. But I can't help but microanalyze the results of every test I ever take.
The bad:
I was re-asked a question I'd answered correctly the first time. WTF?
I was re-asked a question I'd missed, and got it right the second time around. Yay? If I'm to be re-tested on a topic, how about a different question? $210 per ABA diplomate per year can't produce more than 30 unique questions per quarter?
There were 4 questions that were essentially identical in the concept tested. I won't give it away and spoil it, but it involved strategies to reduce a particular postop risk. I got that question right the first time, and the second time, and the third time, and the fourth time.
I feel a little ripped off that my $210 didn't even get me 30 different questions. Or better yet, a different question on a topic related to one I missed.
Three questions were answered correctly by 97%, 99%, and 99% of takers. Questions like this are filler. Honestly, ABA, what the hell? You had one job: come up with 30 whole questions by January 1st, 2016. And 10% of them were filler.
Most of the questions were too straightforward to be interesting. The basic problem, though, is that you need more than 60 seconds to read, digest, think about, and answer an interesting question. I don't know how MOCA Minute gets past that problem.
When I did practice questions from ACE or Hall, I missed plenty, but I never disagreed with the provided answer and explanation, and I never thought the answers were unclear or ambiguous. Here, I missed two questions where I thought my answer was better than the one provided.
The good:
There were 3 or 4 questions that I thought were really very good ones. Relevant topics, good scenarios, good choices, good explanations.
It was convenient. The web site worked. I'm incrementally closer to recertification this cycle, and I didn't have to waste a day at a test center.
Answers were well referenced.
I still like the idea. They could use some better (harder and longer) questions, with longer time limits, but the overall concept is sound.
It was sort of interesting to see what percentage of other people missed the same questions I did. The average peer performance across those 30 questions was 76.1%. A few questions were clear outliers (97%, 99%, 99%, 44%, 53%), but most were answered correctly by about 65-85% of takers.
I finished at exactly 80%, but I can't say I'm either proud or embarrassed by that.