Do survey projects count as "research"?


betalactam
Hey all, I'm currently leading a project that involves a state-wide knowledge/practices assessment among clinicians regarding a relatively unknown but increasingly relevant medical condition. I will be weaving an educational initiative into the initial survey and will be re-assessing knowledge and practices at 6 months. I'm really hoping to get a better understanding of whether and how physicians are treating this particular condition, and to assess the effectiveness of a short CME-like activity in improving the care of patients with this condition.

While this project does involve data collection, it's in no way a rigorous clinical study or basic science project, and it almost seems like this might fall more into the category of an educational outreach project than actual research. Not to be neurotic or anything, but would you guys say that this falls into the category of research? Does it sound like a "fluff" project?

 
It’s definitely not a meta-analysis…

Whether or not this is “research” depends on the questions you are asking and how rigorously you are trying to answer them.

If you’re just doing a quick CME kind of survey after an educational activity to show, “look, after our education, our learners’ confidence level went from a 2 to a 4,” then no, that isn’t research. It can still be a meaningful activity, but you’re not trying to generate new scientific data, which is what research is. If you really want this to be research, then you need to define your scientific questions much more rigorously up front (i.e., what is your hypothesis, what are your primary and secondary objectives and endpoints, etc.) and figure out how you want to answer those questions using your survey.
 
Thanks for everyone's replies. Yes, I will be testing the hypothesis "physicians in X geographical region are poorly equipped to diagnose and treat patients with Y condition." The initial survey will collect basic knowledge questions about Y condition, as well as numerical data on how many times they have seen patients with, or ordered appropriate testing for, Y condition. Since I'm fairly certain that there is low knowledge of this condition based on my informal surveys, I figured that I would just go ahead and provide some education with the initial survey. Then at six months, I will again collect data on how well they can answer basic management questions and how many patients they have seen or tested for Y condition.

So I guess you could say it's a cross-sectional survey with CME combined in there, and then a follow-up cross-sectional survey to quantify how knowledge and practices have changed.
 
Do you have an educational initiative between the surveys? It sounds like yes in the form of a CME component.

How do you define poorly equipped? Just because a provider doesn’t know about a condition doesn’t mean they can’t quickly look it up, educate themselves, and treat a patient.

A better project would ideally involve an initial survey, education, then another survey with a hypothesis of “I hypothesize that I can increase knowledge of Y condition by my education initiative”. You then statistically analyze pre vs post survey answers to see if you prove that hypothesis.


I’m not trying to bad mouth your project as it is. I’m trying to get you thinking about it and ways you could improve it.
 
Yes, there will be an educational initiative in the survey (essentially baseline assessment + education).

I was thinking poorly equipped as in this is a relatively recently described condition, so few physicians are even aware that it exists and wouldn't even know to look for it if a patient presented with symptoms consistent with the disorder. Maybe I should be saying that "physicians are generally underprepared to treat Y condition." I would be defining "underprepared" as <75% correct on basic management questions regarding the clinical presentation, diagnosis, and management of the condition.

I had initially thought about doing a three stage process like you suggested above, but I was told that it might be difficult to get people to follow through on a three-part survey (which was why I wanted to combine the education into the initial survey).

Ultimately, I'm okay if this doesn't end up being some rigorous scientific study as my overarching goal is really to spread awareness and reduce the underdiagnosis of this condition, but if there is some way that I can get the added benefit of "research" on my CV, that would be great too.

Again, I appreciate all your feedback and suggestions.
 
I'm going to disagree. Yes, this could be considered a research project. It perhaps fits better into QI work, but still involves a baseline assessment, some intervention, and then a reassessment. Calling it QI gets around the IRB and any concern about consent.

Will you be able to present / publish this type of work? That will totally depend upon the quality of the project. These types of projects tend to have several big problems, which you should be aware of up front:

  1. Getting people to return surveys is very difficult. Everyone is busy. This is just more work added to the day. Very low return rates are common unless you have a captive audience. Survey the residents for feedback about the program and tie it to their on-call food money, and the return rate is great. Just ask people to do your survey and expect at best a 15% return rate -- unless you have some hook.
    1. This creates a problem of bias. Perhaps the 10% of people who return your survey are interested in your topic. It might look like you've made a huge improvement. But really 90% of people just ignored you.
  2. Writing a good survey is very difficult. It is very easy to end up writing a survey that tells you exactly what you want it to. Writing questions that are neutral about a topic that is meaningful to you is very difficult. Ideally, your pre- and post-surveys are exactly the same. Asking a question like "how much more did you diagnose XXX after this education?" is a classic way to get into trouble -- people don't like missing things, they will tend to answer this question positively.
  3. Timing is important. If you recheck too early, people won't have had a chance to change AND they will still remember all of your teaching, and may answer your questions correctly from short-term memory. Better to let a reasonable amount of time go by -- but that can be hard on a student timeline. If you check at 3 months, will physician behavior be any different at 1 year? And if not, has your intervention been any help at all?
  4. The statistics are a bit tricky. Since you'll be surveying the same people over again, you're (usually) going to need to use a paired test (like the paired t test). Not insurmountable.
  5. You'll need to decide whether you're going to have a survey followed by some education, or whether the survey itself will be the educational intervention. In that case, it's not exactly a "survey" but some sort of questionnaire or quiz. We did this with insulin management -- sending out weekly questions with detailed answers, with the idea that answering the questions incorrectly would result in people learning more about insulin management (and then getting similar questions in the future correct).
Good design up front is critical. You could put all sorts of work into this only to discover that your results are useless. And you need to assess whether consent is needed (which it doesn't sound like it, but consent can be tricky). Ask yourself what the maximum amount of your time this could possibly take and write that down. Then multiply by 5. That's how long it will actually take. You think I am joking. Ask anyone here.
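On the statistics point above: the paired t test is simple to run once you have matched pre/post responses. A minimal sketch in Python, using entirely made-up knowledge scores (respondent counts and values are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]  # per-respondent change
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical %-correct scores for the same 6 respondents, before and after education
pre = [40, 55, 50, 35, 60, 45]
post = [70, 75, 65, 60, 80, 70]
t = paired_t(pre, post)  # compare against the t distribution with n - 1 df for a p value
```

The design implication is the key part: each respondent must be linkable across the two surveys (e.g., by an anonymous code), because an unpaired test on two anonymous cross-sections throws away the within-person pairing.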
 
I'm going to echo NAPD. This CAN be research (and you could also call it QI), but that doesn't mean you'll get much out of it as a project.

1) Response rate is huge in getting it published. I have a survey that I did nationally that I only got a 10% response rate on (for various reasons), and no journal I submitted it to would consider it with that response rate.

2) Doing an educational initiative and then assessing whether people learned after the initiative won't get a ton of points for you--you're comparing them to a situation where you didn't provide the education, so of course, education would improve things. Educational research studies require something more than pre/post testing and satisfaction with the material. If you can find a way to *objectively* measure changes in practice, you might get somewhere. And doing that state-wide is going to be challenging.

3) Survey design is much harder than people think it is. A poorly written survey will get you nowhere. If you're doing a local project just for kicks, not a big deal, but to actually get something published? You want to use questions that can be validated in some way, which is challenging.

4) Will you be collecting qualitative and quantitative data? Do you have someone experienced in qualitative data on your team? Asking a few open-response questions doesn't count as qualitative data if you're not rigorous in the design of the survey.
 
I don't think you're "disagreeing," but rather agreeing that in order for this to count as research (QI or otherwise) you need to put in the appropriate work up front so that your survey will actually test the hypothesis that you set out to test. I just spent 3 years completing and publishing what I thought was a straightforward QI project, and had the paper under review across 5 different journals for almost 2 of those years. I thought that I had done the appropriate steps up front to stand up to reviewer criticism. I was wrong, and I'm honestly very lucky to have gotten this paper into a low impact journal at all.

As an aside, calling it QI doesn't mean you "get around" the IRB. You probably get to have some sort of expedited review or are determined to be "IRB exempt," but you still have to put together the protocol and attain that exemption. Some institutions will actually have rules on how you can publish QI work, or whether you can call a QI project "research"--for example, my IRB exemption letter specifically said that I had to say the project was "not research" in my consent. One reviewer for my project actually noticed that my consent (which was included as a supplement) had this language stating that the project is not research and questioned whether or not I could publish the results, and I was saved only by the fact that I was able to produce the IRB exemption letter. It does not preclude publishing your results, but you just have to be careful how you discuss in the publication. Regardless, getting through IRB in some form is an additional step that you have to account for, and then you need to make sure you play by the rules for QI projects.

The bottom line is that if you want to publish a QI paper, you need to be incredibly rigorous up front. You can't pull numbers out of thin air like calling "<75% correct" as "underprepared" and think that is going to pass muster. Yes, the process is long and difficult to get follow through. Without going through that process, again, it might still be a meaningful activity and worth doing if the OP is passionate about it, but I would not call it "research."
 
Agree with those who said survey writing is tough. Now that anyone and their grandmother can go on surveymonkey, people think no big deal. I took a course in survey design and it surely is not that straightforward.

I think this is good for a QI project, again comparing responses before and after the educational intervention. As others alluded to, there are tons of biases and confounders here, but your goal is to increase knowledge. If you can show you do that, it's worth something. Just be careful with conclusions that you make during your analysis.

I head a QI curriculum at my institution and I agree this could fall under the QI category. I'm not sure it would technically fall under research per se, but it certainly sounds like you could push it that way depending on how you plan and undertake the project. However, just because it doesn't fall under hypothesis-driven research doesn't mean it isn't reportable/publishable.

Tons of ways to approach this. You could for instance provide the educational intervention to some providers and then compare the responses in the educational intervention group vs the non-intervention group. Again, biases and confounders to overcome, but with that, it starts to go in the direction of research. That's more in the realm of proposing a standardized educational intervention that could be applied to other states/geographic areas.
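For the intervention-vs-no-intervention comparison sketched above, the simplest analysis is a two-proportion test on, say, the fraction of each group answering a management question correctly. A rough sketch with invented counts (group sizes and correct-answer counts are hypothetical):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z test for a difference in proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)                  # two-sided p value

# Hypothetical: 42/60 correct in the education group vs 25/58 in the comparison group
z, p = two_proportion_z(42, 60, 25, 58)
```

With small or lopsided groups, a chi-square or Fisher exact test is the safer choice; the point is only that the comparison and its endpoint need to be specified before the surveys go out.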

If you were looking into childhood fatty liver treatment for instance, you might like to see if a year from now, patients are better treated or if clinical outcomes have changed, such as improved overall LFT values. Again, further toward research, but A LOT more work. It sounds like you're not interested in those things, but who knows, if things go well, you could always continue with the project and spin it as more of a research project.
 
Another thing to consider: you compare ‘cohorts,’ so to speak. You can pick this apart in your survey by asking certain questions. I DO NOT intend to open up any cans of worms here, but you could design a survey study on “I hypothesize that NPs have better nutritional health knowledge than physicians.” With that, you could start to compare whether educational exposure results in differences in knowledge and possibly practice. Again, you REALLY have to think about controlling for things. For instance, if you look at an NP 10 years out vs. a physician in their first year out, then you might not be able to make the best conclusions. You would want to compare/correlate multiple questions such as "years of practice since school" and "whether physician or NP" with answers to the knowledge and practice questions.

Things can get complex really quickly. Again, as someone else pointed out already, you should have your hypotheses, questions, and what you plan to compare/look at planned out meticulously before any survey or intervention is sent out. Looking at trends in returned surveys and then retrospectively figuring out 'hypotheses' is bad science. That's in essence an observational reporting study dressed up as a real hypothesis-driven study.

Why shouldn't you just do that (the retrospective thing)? Well, several reasons. One is that if you plan ahead and KNOW what your hypothesis/questions are, you can better control for biases/confounding. If you KNOW you want to test the hypothesis I laid out as an example and you were surveying a large cohort, you would include the questions "If you are an NP, have you experienced any medical school education?" and "If you are a physician, have you ever experienced nursing education/taken nursing courses?" If you aren't thinking of it ahead of time and don't ask those questions, because you have based 'hypotheses' on results rather than thinking ahead, then you have potential bias built into your results.
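A cheap way to see the confounding problem described above is to stratify before comparing: group respondents by years in practice first, then compare roles within each stratum. A toy sketch (roles, tenures, and scores are all invented):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical respondent records: (role, years in practice, knowledge score)
rows = [("NP", 2, 55), ("NP", 12, 70), ("MD", 1, 60),
        ("MD", 15, 80), ("NP", 3, 50), ("MD", 2, 65)]

# Bucket by tenure so experience doesn't masquerade as a role effect
strata = defaultdict(list)
for role, years, score in rows:
    bucket = "0-5 yr" if years <= 5 else "6+ yr"
    strata[(bucket, role)].append(score)

means = {key: mean(scores) for key, scores in strata.items()}
# Compare NP vs MD means within each tenure bucket, not across the whole sample
```

With a real sample you would replace this with a regression that includes tenure as a covariate, but even crude stratification exposes differences that a pooled average would hide.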

Anyway, take a look at research studies in general. Here is a place where you can start.

 
I don't think you're "disagreeing," but rather agreeing that in order for this to count as research (QI or otherwise) you need to put in the appropriate work up front so that your survey will actually test the hypothesis that you set out to test. I just spent 3 years completing and publishing what I thought was a straightforward QI project, and had the paper under review across 5 different journals for almost 2 of those years. I thought that I had done the appropriate steps up front to stand up to reviewer criticism. I was wrong, and I'm honestly very lucky to have gotten this paper into a low impact journal at all.
Supporting my "five times longer than whatever you think the timeline will be" quip!
As an aside, calling it QI doesn't mean you "get around" the IRB. You probably get to have some sort of expedited review or are determined to be "IRB exempt," but you still have to put together the protocol and attain that exemption. Some institutions will actually have rules on how you can publish QI work, or whether you can call a QI project "research"--for example, my IRB exemption letter specifically said that I had to say the project was "not research" in my consent. One reviewer for my project actually noticed that my consent (which was included as a supplement) had this language stating that the project is not research and questioned whether or not I could publish the results, and I was saved only by the fact that I was able to produce the IRB exemption letter. It does not preclude publishing your results, but you just have to be careful how you discuss in the publication. Regardless, getting through IRB in some form is an additional step that you have to account for, and then you need to make sure you play by the rules for QI projects.
Yeah, this is a complicated mess. At my shop, the IRB simply exempts all QI work which seems silly. And we've had the same issue, needing the exemption letter to get it published.
The bottom line is that if you want to publish a QI paper, you need to be incredibly rigorous up front. You can't pull numbers out of thin air like calling "<75% correct" as "underprepared" and think that is going to pass muster. Yes, the process is long and difficult to get follow through. Without going through that process, again, it might still be a meaningful activity and worth doing if the OP is passionate about it, but I would not call it "research."
I agree, but I think that you are naturally setting the "research bar" higher than most, as you have a big part of your life doing research. For the vast majority of IM residency applicants, they have been involved with some sort of research project and have a local school poster presentation. This is the type of project that can meet that standard. There's no peer review for school posters (usually), so you can put any type of project on one. Will research tracks or research-heavy programs be impressed? Certainly not. But it's much of what I review in apps. And honestly I'd rather see this (a self designed project done by the student) than just being a cog in someone's research machine and getting a minor author credit. But that's me. Some places just add up pubs.
 
I'll ask the dumb questions:

What does QI stand for?
What does IRB stand for?

Thank you.
 
You make an excellent point, the bar is definitely different depending on what kind of product one is aiming for. And as I said--whether or not something is publishable is not the end-all/be-all that should determine whether an activity is worthwhile.
Someone has to ask them :)

QI=quality improvement
IRB=institutional review board
 
I mean, I did a very similar project to this during residency. And I got a poster at a national conference out of it. And someone has since cited that poster (abstract) because a lot of people do things like this and then never get a publication out of them, so the publication that cited me was like 'all these people did this intervention on this topic, but none actually published their results' and then they discussed their intervention in more detail and their results.

But part of what 'we' should be doing is encouraging rigorous research experiences, not only to get the full understanding of what goes into research, but so that work is actually meaningful to the larger community. You can do that while also helping the student self-design a project (and that's how I approach working with trainees now :))
 