FutureDOCTORR123456789
It sounds stupid, but is it the worst idea? Especially if I tell it to be harsh and to not go easy?
"I did it...it was decently accurate so far"
How many em dashes did you get??? 🙂
"This is one of the dangers of AI. If you don't have any expertise, including in critiquing medical school essays, you won't know what to call it out on when it does get things wrong. IYKYK"
It can be inconsistent as well when asked the same question, even on things that should have easier-to-read sources online, such as your example.
To those who don't know, a demon threatened to destroy the entire world and Atem used a spell to seal that demon away, using his own name as the key. To prevent the demon from being freed, his successor and surviving subjects had all records of his name destroyed. The spell also sealed Atem's soul away, sacrificing his life, and he erased his own memories after he was sealed away with the demon.
Yeah, but the biggest danger with AI imo is if you have 0 expertise, you won't know what to call it out on. I assume a good chunk here never read or watched yugioh, so I figured it would be a good example. If you have no expertise, you won't know which is the correct answer when it's inconsistent.
Yeah, they tend to phrase it in a structured manner that shows it is confident in the answer even when it is far off. The premed equivalent of the yugioh example would be telling applicants to take the GRE, get 150 hours of hospital volunteering, and they'd be a fine candidate.
OH! I completely forgot. I did exactly what you did with Perplexity's Deep Research function, and the initial feedback was off. Then I pointed out the inconsistencies in its reasoning, and this was the revised assessment. No changes or edits were made to the personal statement; I just asked it to reassess based on what it overlooked. So yeah, I highly advise against this approach. It wasn't a one-shot reassessment: I had it reassess each of its criticisms individually, and the assessment became progressively more positive.
Also, given I only have 2 interviews so far, it seems a lot of schools don't share the AI's sentiment.
[Attached: screenshots of the revised assessment]
"I don't know how many schools you applied to; it can be argued that AI can't be far off because you only had 2 interviews (which is an alternate premed interpretation... not a position I would take). But the second step you took is very important for many applicants to realize when I ask what one's purpose as a physician is. Too often applicants' reasoning for medicine sounds to me like they would be as happy with a different profession... (you are concerned about mental health... why not clinical psych), but the point is made that you can't take the first output from an AI bot at face value. You have to stand up for yourself."
40, give or take. Admittedly, I haven't yet been rejected by schools that have rejected a significant portion of their applicant pool, like Rochester and UChicago, and I also got a secondary from UCSF, so there's that; you may be right that its assessment (I presume the second one) isn't far off.
FYI, I also ran my personal statement by a couple public health professors to ensure I correctly differentiated public health from what I was trying to accomplish.
"Yeah, I don't like Perplexity. If you're avoiding paid subscriptions, consider Gemini Pro (Google One AI is free for a year with an .edu e-mail). Even ChatGPT 5(.1?) mini is available to the public for free with rate limits."
I never once said "underserved" in my personal statement, actually. I did emphasize innovation heavily and made sure to mention my mentor and how he's helping me, so it comes across as grounded, not like someone who's due for a rude awakening, given our last conversation (thanks for the benefit of the doubt, by the way). But yeah, I highly recommend not using AI.
If you are really reading for comprehension, even the second response is a whole lot of nothing. The model has to do rhetorical gymnastics to come up with the outcome you prompted.
For example, it's true that schools look for evidence of innovation... but just proposing a project is not inherently innovative. Given that clinical informatics is a known field with an entire fellowship program supporting it, I struggle to see how the practice of using big health data to support administrative/policy decisions is innovative. Presumably we have always done this manually; technology is just making it faster (and even then, we have to be careful about the conclusions we draw from this data, since we don't presently have a way of "checking the work" without doing it manually, which defeats the purpose). Further, considering neither you nor your mentor collected the data you are using to guide the project in the first place, calling it innovative crosses the line from overstatement to outright fabrication. I'm not pooh-poohing CI; I had several CI projects. You just have to find a way, rhetorically, to make the connection.
The remaining points are redundant and generic. Every physician will have a "commitment to the underserved," to one degree or another, by mere coincidence. You could be seeing patients at the most expensive health system on the planet and it would not preclude the possibility of caring for someone who was once living in a precarious situation. Your basic science and future clinical education are designed to help you translate research into evidence-based practice. And everyone aims to tell a story and show reflective capacity through their essays.
I think you would know you're on a good track when the AI is providing receipts. If you're going to make those claims, fine—every school wants people who show these qualities—but they have to be both factually accurate and true to you to be valuable. It needs to be connecting the dots between experiences and reflections so that you are not spending your characters congratulating yourself (which has a tendency to go down like a lead balloon).
Ultimately AI is prompt based, so if you want the outcome to be "strongly recommend for interview," you can get it to hallucinate something that makes sense if we were just to take its word for it. The hard work is making sure the shoe actually fits, and that a human can put two and two together without having to be so on the nose about it. I hope that makes sense!
Your medical school application—shows a compelling blend—of strength, maturity and—commitment. Your clinical and—volunteer work show that you don't just participate—you engage and reflect—thoughtfully.
chat I worry about the state of brain rot
why would you use gpt to rate your app lmao
gpt is a glazer
quality over quantity. 🤣
(whole time I have 50 clinical hours)
"And this is why, as an ex-FAANG senior software engineer with a PhD and pubs with hundreds of citations in AI, I know enough to never touch an LLM for any useful work task."
Perplexity is a search engine. It's useful for finding research papers. I'm also working on a project that I can't talk much about because I risk doxxing myself, but I'm working on it with a physician mentor and just had a massive breakthrough thanks to Perplexity, so it's not as bad as you think it is.
Oh, just let it read through the web and give me a starting point? The work of critically analyzing its output to detect possible inaccuracies and fix them is vastly harder than doing the damn job myself, as any experienced line supervisor can tell you. The benefit of having a team (read: some AI bot) is scalability, but I can trivially research a topic sufficiently with Google Scholar alone in 10 minutes.
If you need research skills, writing skills, etc., train them. A magic shortcut that jumps you to 80% accuracy, while actively impeding you from improving beyond that, is worse than useless when the professional standard is 95+%.
Repeat after me: an LLM is designed to do exactly one thing: predict the next word of human input, based on data trawled from the internet, etc., violating as much copyright as they can get away with. That's it.
Absolutely nothing about meaning or sense, let alone accuracy*. It generates arbitrary amounts of text that is maximally similar (on average) to the human input it has seen. Which means (1) its grammar and prose are impeccable, and (2) any correct information it spews out is purely incidental, a byproduct of historical pattern matching.
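To make the "next word" point concrete, here's a toy sketch. Everything in it is made up for illustration (real models use neural networks over subword tokens, not a bigram table), but the principle is the same: generation is pattern continuation, and no step anywhere checks whether the output is true.

from collections import Counter, defaultdict

# Made-up "training data"; a real model sees trillions of tokens.
training_text = (
    "the patient was stable the patient was discharged "
    "the essay was strong the essay was generic"
)

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word, length=6):
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Greedily pick the statistically most likely continuation;
        # nothing here knows or cares whether the result is accurate.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # fluent-looking continuation, zero understanding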
Here's an RCT preprint: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Coding is actually one of the best possible use cases for AI tooling and hence the most developed testbed: (1) well-defined problem, (2) easy to automatically evaluate arbitrary output to a quantitative standard.
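For anyone wondering what "automatically evaluate arbitrary output to a quantitative standard" looks like in practice, here's a minimal sketch (the candidate code and test cases are invented for illustration): run whatever the model emits against fixed test cases and count the passes.

# Score a model-generated function by how many fixed test cases it passes.
candidate_source = """
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2
"""

test_cases = [
    (([1, 3, 2],), 2),
    (([4, 1, 3, 2],), 2.5),
    (([7],), 7),
]

namespace = {}
exec(candidate_source, namespace)  # load the generated code
fn = namespace["median"]

passed = sum(fn(*args) == expected for args, expected in test_cases)
print(f"{passed}/{len(test_cases)} tests passed")  # the quantitative standard

Essays have no equivalent of that last line, which is part of why "the AI said my personal statement is strong" means so little.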
*: My colleagues are of course trying. In my professional opinion, the community is lightyears away from getting it right. One popular framework is reinforcement learning, where you get humans to tell the model whether it's doing the right thing or not. Two words: remember Tay?
Most people use a search engine the same way and with the same expectations—we know the internet is not exactly fair and balanced on the whole. I think the bigger problem is that it speaks with an authoritative enough tone to convince 90% of people using the technology that it knows something it doesn't. Or that it can even really "know" anything. Or that "it" even is an "it." But, for an exponentially growing set of more casual use cases, LLMs do just fine.
AI definitely isn't what people say it is (I giggle when people say AGI is imminent)... but you cannot argue with everyday, low-stakes applications that are just genuinely helpful.
I think having an LLM read all your essays and reflect back the questions you should be answering in those essays (why medicine, why now, why not x, y, z) was useful. It was also useful to realize I could have been making implicit points I didn't mean to make (but that could reasonably have been interpreted that way).
We are very far from uploading a transcript and some basic information into an LLM and having it spit out the ideal medical school application. I don't think anyone here had plans to attempt that. It would make sense that PhD engineers are not at work doing some stylistic editing on some cute little essays about breaking an arm and realizing you now owe a duty and debt to society to become your one true destiny: an orthopod.
"My question is this: was any component of the 'AI' part of the search engine at all necessary for your breakthrough? How different would your experience have been had you simply used PubMed? This includes the skills that you would have otherwise trained in the absence of the AI crutch."
It was more than just finding a simple peer-reviewed journal article, and yes, it was borderline necessary for my breakthrough.
As I posted originally, all these startup AI tools exist because they violate copyright on a planetary scale as their fundamental business model. Consider your ethical sensibilities.
A special purpose tool will always do vastly better than a general one: no free lunch. LLM engineering has already moved into specializing many, many different use cases, as it is in fact the standard way to obtain acceptable performance.
At this point AI (NLP) serves mostly as a "conversational" front-end interface to the specific tool, and the actual search itself doesn't need AI to begin with. This is the approach of WolframAlpha, which predates the AI malignancy.
The critical area which makes all the difference: a search engine is a retrieval tool. It shows you links to pages, where you get to scrutinize the primary source and make your own decisions.
An LLM shows you a constructed synthesis where the connection to the (possibly nonexistent, ffs) sources is uninterpretable, and critical thinking is effectively impossible without already knowing the correct answer, whence it becomes a pure efficiency tool in the shape of a foot-autocannon.
AI, in its current massive ANN form, does have an actual use: in very high sensitivity rule-out screening tests, to reduce the workload in vigilance tasks which humans are evolutionarily terrible at. Think audits.
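Rough, made-up numbers for why sensitivity is the property that matters in a rule-out screen: if the model almost never misses a true positive, a "negative" can be trusted, and humans only have to review the flagged cases.

prevalence = 0.02    # 2% of cases actually contain the finding (made up)
sensitivity = 0.99   # the screen catches 99% of true positives
specificity = 0.90   # it's allowed to be sloppier on the negatives

tp = prevalence * sensitivity
fn = prevalence * (1 - sensitivity)
tn = (1 - prevalence) * specificity
fp = (1 - prevalence) * (1 - specificity)

npv = tn / (tn + fn)   # chance a "negative" is truly negative: ~0.9998
flagged = tp + fp      # fraction still needing human review: ~12%
print(f"NPV: {npv:.4f}, flagged for review: {flagged:.0%}")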
Summary – Conservative Prediction
Total predicted IIs: 15
Total predicted Acceptances: 12 (including 1 from WL movement)
Total predicted Waitlists: 3 (1 likely to accept, 2 likely to reject)
Total predicted Post-II rejections: 7
Total predicted Pre-II rejections: 14
If you notice, I'm actually not debating that point... I'm just saying the internet more broadly also doesn't exactly cite its sources, so nothing's really changed. You'd have to already be an expert to do expert-level research.
When I was growing up in the age of dial-up, the popular thing to say was "Wikipedia is not a legitimate source." Now they say "LLMs make mistakes." If anything, it is the "telephone game" effect where synthesis of already unreliable information is likely to stray even further from the truth, but it doesn't seem to be as much a problem of categorical lack of utility as it is one of epistemology and how accessible ground truth data could realistically ever be, assuming current limitations.
I will reiterate there are more proximal use cases that are much more tolerant even of wrong answers. Sometimes you need a sounding board to bounce concepts off of. I've found it immensely helpful in identifying literature that might help me further explore some specific topic of interest.
And again, I'll reiterate I am just as skeptical, but some folks really push the hypothetical limit of what these technologies are actually being used for. If people are categorically using LLMs to do important, consequential work, they should be held responsible in accordance with whatever integrity policy governs their work products... I suspect that a retraction of LLMs (if even feasible at this point) would just result in the contraction of use to precisely the high-discretion knowledge workers privileged enough to access them, which makes your specific point moot. Lazy SWEs will still attempt to vibe-code.
The paternalistic attitude just kinda tickles me, especially in the setting of the populist "personal responsibility" FAFO politics of 2025. Pretty soon full-grown adults are going to need accountability buddies to cross the streets. People are already unaliving themselves from chatting with a computer program. Personally, I think the genie is already out of the bottle. Darwin and Freud win again.
*: It's not even just the issues I've mentioned so far. Machine learning doesn't solve arbitrarily hard pattern-matching problems; it transforms them into, in some respects, the even harder problem of data cleaning, hidden-bias elimination, and essentially trying to force the infinitely literal computer into doing what a human thinks is the "correct" thing. At a scale where manual checking is ~impossible.