Oh, you mean the medical common knowledge exam? Step 3 (the multiple-choice section, anyway) is not significantly different from Step 1, other than being more focused on clinical applicability. It's still mainly pattern recognition, and it's generally regarded as much easier. Source: I have taken both exams.
That's the thing. Doctors don't want something that can "parse through literature." We already have that (UpToDate), and we already see the impact of people (NPs or MDs) trying to treat conditions they don't know much about based on general sources like that. A small but notable proportion of the cases I see are patients whose NP/PCP or inpatient team identified a condition, looked up how to treat it, and attempted the treatment themselves, doing more harm than good.
So when I go into ChatGPT and ask "what is the preferred treatment for [condition X]," I get a list of answers. But if I ask how to choose among those treatments, or what evidence supports the recommendations, ChatGPT refers me to PubMed. Perhaps that will change in the future, but at this time I don't see AI as much more than a supplement for medical students (or for curious patients). Also consider that, for many, many clinical questions, there are no answers in the literature. Not to mention dealing with patients who don't want to follow the standard of care.
I'm not trying to discourage you, and I would be very interested to hear about medical students' experience with AI in the coming years. But I think the disconnect with early-training medical students is the assumption that challenges in medical treatment arise from a lack of readily available knowledge (i.e., the answer is in front of us, we just need help seeing it) rather than from the facts that the human body (and the medications we give it) does not always behave predictably, that patients do not provide accurate histories, that lab tests can be misleading, and that medicine is truly more of an art than a science. You will likely not appreciate this until residency.