AI detection concerns


nathankazuto1

I have been writing my AMCAS application with the help of AI. First, I write out the whole entry, then I use ChatGPT to help fit it into the character count and restructure sentences, and then I edit it further myself. When I run my work through AI detectors, some entries come back as 100% human, while others are flagged as 100% AI (even though the content is original to me aside from some word choices and sentence structures). Should I be concerned? This is the policy from AMCAS's website:

"I certify that all my writing, including personal comments, essays for MD-PhD applicants, and descriptions of work/activities, is my own. Although I may utilize mentors, peers, advisors, and/or AI tools for brainstorming, proofreading, or editing, my final submission is a true reflection of my own work and represents my experiences. I acknowledge that no changes can be made after submission and will thoroughly proofread my work. Quotations are allowed if I cite the source."

It seems like I have been following their policy fairly strictly, but I worry that some entries could be wrongly flagged as "100% AI-generated." I would hate for that to be the reason I get rejected, especially since I have put a lot of time and thought into my entries. I would love to hear the thoughts of some adcom members. Thank you!
 

No detector is 100% reliable. I don't know whether you are feeding ChatGPT plain text files so that embedded metadata or watermarking doesn't get copied over, but you control what you write and submit. Just be careful, and be aware that we're all likely using some AI tool; it's how you use it that matters. Keep your version history in case you need it.
 

Thank you for your thoughts! Would you say I am utilizing AI appropriately? What do you mean by "keep your version history"?
 
AI is a tool whose value proposition is distributed roughly like a normal curve. A small proportion of people extract virtually nothing from it (boomers), and another small proportion extracts a disproportionate amount of value (power users: people working at the frontiers of their fields). For most people in the middle, the value of AI is limited, and that's what's creating this emerging "AI slop" dysphoria.

AI can be really helpful for certain narrow tasks, but you have to know how to use it. For example, if you're a bad writer and you feed a draft of a bad essay into ChatGPT, the output will probably convince you that it's better than what you gave it.

But if you're a bad writer, you can't really tell the difference between good and bad writing, so you're less likely to notice what makes certain pieces of writing seem robotic or unnatural. The lifetime academic reading the essay will probably pick up on this better than an AI detector can.

Good writers, I think, can use AI effectively. Where I see a real opportunity for students is in leveraging AI as a sounding board for narrative storytelling. I had tens of thousands of pages of diary entries from throughout college. Do you know how insanely useful it was to upload them all as a PDF to ChatGPT and have it select experiences for my essays? I could have spent months just reading those diaries, some useful for this purpose, some not; instead, it helped me streamline the workflow.

Once you engage with your writing to this degree, it doesn't matter that AI helped you along the way—you wrote every word. It is your story, in your own words. You just got to them sooner than you would have otherwise.

All of this to say, you're probably overthinking it. As long as you can say with your chest that those words are yours, you're probably fine. If you reference six esoteric philosophers in your essay and then can't answer a question about them, that will invite more scrutiny.
 
Which detectors are you using that are saying 100% original, and which are saying 100% AI?
 
There is no AI detector specific enough to say you definitely used AI. Even if there were one, it would be very short-lived.

I worked for a company that trains AI as a side hustle. There are seven versions of ChatGPT I can use right now, each trained differently from the others. There are five versions of Claude, two of Google's models, Meta's Llama 4 Maverick, and a pre-release version of Astral by OpenAI.

Basically, you're fine unless you outright admit it, but you should still be cautious when using AI, and be your authentic self.
 
It's so easy to tell if you used AI. The writing sounds overly polished and sanitized, lacks the natural cadence of human English, uses vocabulary that isn't common in everyday usage, or is missing the storytelling element. I can tell if something like a personal statement has been written with AI, and I wasn't even an English major. Imagine how easy it is for adcoms, whose only job is to read these essays.
 
We do have other jobs than reading these essays haha--we are also your medical school faculty! But I agree that an essay done completely with AI is easy to detect. I'd rather read an awkward but authentic essay than a polished but generic AI-generated one.
 