Dealing with dishonesty


Ollie123
I was just wondering if anyone here had suggestions or advice on dealing with dishonesty among research participants, particularly with regard to qualifying for and participating in studies?

With the economic downturn, motivation to participate in paid research seems to have grown progressively stronger. We have far more people who are, frankly, desperate to qualify for our studies. People will call repeatedly and try giving different information on phone screens, have occasionally shown up in person to harass staff members, etc. And that does not even begin to speak to people's motivation to comply with study procedures...when data should or should not be used has become a particularly thorny topic for discussion. We generally take an "innocent until proven guilty" approach and only throw out data where non-compliance is blatant, but I suspect it has adversely affected studies on numerous occasions. Even just glancing through reverse-scored items that people clearly did not read correctly reveals that data integrity is a major concern in a number of studies.
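(As an aside, that reverse-scored item check is easy to script. Below is a minimal sketch in Python/pandas - the 1-5 scale, the column names, and the cutoff are all made up for illustration - that flags respondents whose reverse-keyed answers contradict their straight-keyed counterparts after recoding.)

```python
# Minimal careless-responding screen, assuming a pandas DataFrame with one
# row per participant. Item names, the 1-5 scale, and the cutoff are all
# hypothetical placeholders.
import pandas as pd

SCALE_MAX = 5  # assumed 1-5 Likert scale
# Hypothetical (straight-keyed, reverse-keyed) item pairs
ITEM_PAIRS = [("mood_1", "mood_1r"), ("energy_2", "energy_2r")]

def flag_careless(responses: pd.DataFrame, cutoff: float = 2.0) -> pd.Series:
    """True for respondents whose reverse-keyed answers, once recoded,
    sit far from their straight-keyed counterparts on average."""
    gaps = []
    for straight, rev in ITEM_PAIRS:
        recoded = (SCALE_MAX + 1) - responses[rev]  # undo the reversal
        gaps.append((responses[straight] - recoded).abs())
    mean_gap = pd.concat(gaps, axis=1).mean(axis=1)
    return mean_gap >= cutoff  # big average gap -> probably not reading items
```

Consistent with the "innocent until proven guilty" approach, I'd treat anything a script like this flags as a prompt for a human look rather than an automatic exclusion.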

I'm just wondering if others out there have had similar experiences, and I'm hoping to generate a discussion on how best to deal with these issues. We have started requiring photo identification to confirm identity before participation, but I'm not sure this is ideal since it limits our external validity (i.e., many extremely low-SES individuals will not have ID). We do not provide reasons for ineligibility when participants are disqualified. Some studies have required participants to have stable contact information (i.e., no temporary shelters) because our ability to schedule/follow up with participants was getting out of control. Again, not an ideal solution.

Anyway, any thoughts on the issue? It's a careful balance: trying not to be overly rigid, but also not letting these problems affect the quality of the work. I'm aware of a number of intense, alternative approaches (e.g., shifting to a CBPR approach), but these are generally ill-suited to the kind of research that we do.

 
Yes, we have these issues. Always have; not sure if it's really gotten worse lately.

1. We do not tell people why they are disqualified during phone screens. We are very delicate and nice, and always give them loads of referrals and compassion if they don't meet criteria.

2. Even if someone fits perfectly diagnostically, we don't necessarily take them. We always ask ourselves: Is this person going to be cooperative with the study protocol? Will they follow instructions, or do they tend to have some underlying problems with authority that could lead them to cause more headaches than their data is worth?

3. We require ID as well. We rarely have people completely malinger, but exaggeration has no doubt been an issue. It might sound bad, but one of my supervisor's favorite questions after we administer the SCID is "Do you believe them?" (especially in regards to substance use/abuse, as it is a rule-out for most things we do). Our supervisor has amazing confidence in us, and if we even have a suspicion that they are either downplaying substance use or "playing up" their symptoms...we will often eliminate them. Incongruence between observed affect and reported level of depression, and people who know the diagnostic criteria just a little too well, always get a careful look.

4. We keep an Excel sheet with a list of the "trouble makers"...lol. They are on a do-not-call list. We have also had people try to weasel into studies by calling back, giving a different name, and reporting their symptoms differently. However, since they are not told anything about why they were disqualified the first time, they mostly don't succeed. There is such a thing as being too depressed for our studies too, so...
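A quick thought on that list: the name-changers are easier to catch if the log is keyed on things that are harder to change than a name. A minimal sketch in plain Python (the fields are hypothetical, and a real version would obviously need to store this data securely):

```python
# Minimal do-not-call / repeat-caller lookup, keyed on date of birth plus
# phone digits rather than the name callers give. Fields are hypothetical.
def normalize_phone(phone: str) -> str:
    """Strip formatting so '555-123-4567' and '(555) 123 4567' match."""
    return "".join(ch for ch in phone if ch.isdigit())

class ScreenLog:
    def __init__(self) -> None:
        # (dob, normalized phone) -> outcome of the earlier screen
        self._seen: dict[tuple[str, str], str] = {}

    def record(self, dob: str, phone: str, outcome: str) -> None:
        self._seen[(dob, normalize_phone(phone))] = outcome

    def previously_screened(self, dob: str, phone: str) -> str | None:
        """Return the earlier outcome if this caller has screened before."""
        return self._seen.get((dob, normalize_phone(phone)))
```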
 
Wow. I'm surprised to read about these issues--we work with a VERY low-income population (median annual income usually around $11-12k, with around a 20% employment rate), and I've never heard of us having these problems--in fact, usually we worry about not recruiting enough participants!

Also, I'm confused as to how CBPR could be an "alternative approach"--it doesn't directly affect recruitment/screening (unless you want your community advisors to help you recruit, but that still wouldn't impact eligibility requirements) but rather impacts the design and execution of the project (e.g., cultural tailoring of measurements and forms, IDEAS for recruitment strategies, etc.). Maybe you're thinking of it in a different way than we think of it in our studies? I'm curious.

Interesting.
 
Sorry, the reference to CBPR was more in relation to the problems with scheduling and attrition than to eligibility - it has definitely been effective for those purposes. It's also been used as a way to gain access to certain populations not normally involved in research, but that wouldn't be our purpose.

Realistically, I was wondering if we might be able to use some of the concepts to at least help improve attrition without shifting to a full-blown approach (CBPR neuroscience research? We would at least be innovative!)
 
CBPR (and CBR, and PAR, and CBPAR) are all on a spectrum. Community participation may range from providing culturally-relevant advice and recruitment suggestions, to being the leader of the whole project (coming up with the "problem" and research questions, determining recruitment methods, data collection, etc.). The use of field liaisons is pretty helpful in recruiting certain groups, although that technique is usually used when there is concern about too low an n, not too high a one!

A study I worked on had participants whose phones were often disconnected and who moved frequently. We still need them in the study, but the longitudinal follow-ups are tricky. Part of our initial recruitment interview involves having them give us information for THREE additional contacts who will always know where they live and how to get a hold of them. We also do frequent mailings so we know early on when someone has left. Seems to have worked pretty well so far, and it enables us to study a population without really stable living situations.
 
To clarify, we do still have recruitment issues - the problem tends to be that we get lots of interest, but a LARGE percentage of callers are ineligible (> 75% on many studies). Part of the motivation may stem from the amount of compensation...we pay a minimum of $20/hour. On some of the large projects, people can earn up to $500, so it is understandable that they would be desperate to qualify. On the other hand, we can't exactly do bad science and waste grant money just for the sake of providing these people an income.

erg - Have you run into problems reporting those procedures in manuscripts? I'm just wondering how that would look..."Participants were over 21, had normal or corrected-to-normal hearing and vision, and didn't come across as a jerk during the intake process...". Obviously an exaggeration, but I'd be curious how you phrase it. Frankly, I'd like to do that in our lab and I think it would greatly improve the quality of the data we get, but we haven't figured out a "fair" way to get that through the IRB, scientific review, etc.

Thanks for the thoughts, everyone - keep them coming!
 
Wouldn't this be a sign that you need to reconsider the compensation value for ethical reasons?
 
Perhaps worth considering, though it will be a careful balance. Our studies are relatively time-intensive (for the highest paying, participants attend 7 sessions each lasting 2-4 hours) and relatively invasive (blood draws, drugs, psychophysiology, genetics, etc.), so our payment is actually well within the "norm" for what similar studies pay - in fact, I suspect the IRB would give us a hard time if we tried to reduce it. Plus, if we reduced it too much, participation might then only be worthwhile to the lowest-income individuals, which obviously carries its own set of research problems and ethical concerns. I'm also not convinced compensation is the major factor...we see just as much lying, cheating, and conniving to get into grad student projects ($20-30) as we do for the studies that pay several hundred dollars.

These are the sort of issues that scare me to death about research. It's rare enough for people to properly report data screening procedures (e.g., checking for outliers, examining residuals, etc.). You certainly never see reports of what percentage of people were suspected of not filling out the forms honestly, how many people had questionable eligibility, or how many people may have been in the study two or more times (for the anonymous survey researchers). All these things are taken for granted for the most part, and it terrifies me.
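(If nothing else, the basic screening steps can be scripted so that what gets reported is exactly what was run. A minimal sketch in Python - the column names and z-cutoff are hypothetical, and it only covers the univariate-outlier and residual checks mentioned above:)

```python
# Minimal, reportable data-screening pass: univariate outlier counts and a
# crude residual check for an OLS model. All names are placeholders.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def screening_report(df: pd.DataFrame, outcome: str, predictors: list[str],
                     z_cut: float = 3.0) -> dict:
    cols = predictors + [outcome]
    z = df[cols].apply(stats.zscore)                   # column-wise z-scores
    n_outliers = int((z.abs() > z_cut).any(axis=1).sum())

    fit = sm.OLS(df[outcome], sm.add_constant(df[predictors])).fit()
    resid_skew = float(stats.skew(fit.resid))          # rough normality check

    return {"n": len(df),
            "n_univariate_outliers": n_outliers,       # rows with any |z| > cut
            "residual_skew": resid_skew}
```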

I'm going to end up turning to animal research!
 
Thanks for the info, bunch of stuff I've never really considered!
 
CBPR (and CBR, and PAR, and CBPAR) are all on a spectrum. Community participation may range from providing culturally-relevant advice and recruitment suggestions, to being the leader of the whole project (coming up with the "problem" and research questions, determining recruitment methods, data collection, etc.). The use of field liaisons is pretty helpful in recruiting certain groups, although that technique is usually used when there is concern about too low an n, not too high a one!

At least in my lab, this alone wouldn't make for CBPR. The principal investigators (plus most of the Co-Is, Co-PIs, and RAs) on all but one of our projects are members of the populations we study, but we wouldn't consider a project to be CBPR unless we had community advisors (who are not also researchers--the distinction may get a bit "fuzzy" at times, though) participating throughout the project.

One interesting idea for recruiting hard-to-reach groups is Respondent-Driven Sampling (RDS).
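For anyone unfamiliar: in RDS, each participant gets a fixed number of coupons to recruit peers from their own network, and estimation later reweights by network size. A toy sketch of just the coupon mechanic, on a synthetic contact network (plain Python; the degree-based weighting, e.g. RDS-II, is omitted):

```python
# Toy Respondent-Driven Sampling: seeds recruit peers via a fixed number of
# coupons. The network here is a synthetic adjacency list; real RDS also
# requires degree-weighted estimators, which this sketch omits.
import random

def rds_sample(network: dict[int, list[int]], seeds: list[int],
               coupons: int = 3, target_n: int = 50) -> list[int]:
    sampled, frontier = set(seeds), list(seeds)
    while frontier and len(sampled) < target_n:
        recruiter = frontier.pop(0)                    # next coupon holder
        peers = [p for p in network.get(recruiter, []) if p not in sampled]
        for peer in random.sample(peers, min(coupons, len(peers))):
            sampled.add(peer)                          # peer redeems a coupon
            frontier.append(peer)                      # ...and recruits next
    return list(sampled)
```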
 
erg - Have you run into problems reporting those procedures in manuscripts? I'm just wondering how that would look..."Participants were over 21, had normal or corrected-to-normal hearing and vision, and didn't come across as a jerk during the intake process...". Obviously an exaggeration, but I'd be curious how you phrase it. Frankly, I'd like to do that in our lab and I think it would greatly improve the quality of the data we get, but we haven't figured out a "fair" way to get that through the IRB, scientific review, etc.

Thanks for the thoughts, everyone - keep them coming!

It is not the IRB's job to dictate the inclusion/exclusion criteria for your study...no one has a right to be in your research study, even if they do qualify diagnostically.

We obviously don't report that one of our exclusion criteria is people who may be "oppositional to some study procedures, thus causing us headaches," but I think this is pretty well covered under our right to exercise "clinical judgment" in our evaluation of potential research subjects. "Appropriateness for study" (in reference to either psychiatric stability/acuteness or behavior) is a common term used, I believe.
 
I'll have to think about that phrasing - it might be worth implementing in my own research down the line even if we don't use it in my current setting.

I would also like to invite you to chair our IRB. There seems to be no limit to the number of things they insist on sticking their noses into, and I have a really strong feeling that they would take exception to that.

I'm convinced it has just become another useless bureaucratic mess, and many of their concerns seem to have little to do with actual ethics.
 