Accusations of peer-reviewed journals accepting articles without peer review/editorial favoritism

futureapppsy2

Assistant professor
Volunteer Staff
Lifetime Donor
15+ Year Member
Joined
Dec 25, 2008
Messages
7,641
Reaction score
6,377
I came across this series of interesting blog posts on Johnny Matson and some journals associated with him (primarily Research in Developmental Disabilities [RIDD] and Research in Autism Spectrum Disorders [RASD]). They also touch on the Journal of Developmental and Physical Disabilities (JDPD) and Developmental Neurorehabilitation (DN), as well as Sigafoos, O'Reilly, and Lancioni, a trio of ASD/DD researchers who publish together and seem to have gotten (and perhaps still get?) insanely quick acceptances (often within 2 days) from Matson's journals.

Basic points:
-RIDD/RASD had insanely quick turnaround times in general for accepted manuscripts from ~2010-2014 (often less than 10 days from first submission to official acceptance).
-Sigafoos, O'Reilly, and Lancioni (SOL) had even more insanely quick turnaround times (median 4 days, compared to a median of 65 days for a matched control set). Of the 73 articles Sigafoos published in RIDD/RASD in this time period, 43 were accepted within two days of initial receipt, and 13 were accepted the same day they were submitted.
-These times would seem to indicate that the manuscripts were not actually being peer-reviewed.
-These authors (Matson + SOL) were publishing at extremely, arguably unrealistically, high rates in these four journals (*each* author had 100-160 articles in just these four journals in four years).
-All four journals had at least one of these four as the editor or an associate editor during 2010-2014.
-Matson in particular published a ton of articles (almost 120) in RIDD/RASD alone during 2010-2014, while he served as editor of both journals.
-The turnaround times for these were, again, insanely quick (median of 1 day from original submission to final acceptance for his articles in DN).

The posts are here and include some great, comprehensive data as well as links to the complete datasets behind the analyses:

http://deevybee.blogspot.com/2015/02/journals-without-editors-what-is-going.html

http://deevybee.blogspot.com/2015/03/will-elsevier-say-sorry.html

http://deevybee.blogspot.com/2015/02/editors-behaving-badly.html
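
Since the posts link the complete spreadsheets, here's a rough sketch of how one could recompute the headline turnaround figures. The file and column names ("ridd_rasd_articles.csv", "submitted", "accepted", "author_group") are hypothetical placeholders, not the actual datasets' schema:

```python
# Sketch of recomputing turnaround statistics from a dataset like the ones
# linked above. File name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ridd_rasd_articles.csv", parse_dates=["submitted", "accepted"])
df["turnaround_days"] = (df["accepted"] - df["submitted"]).dt.days

# Median submission-to-acceptance time, split by author group
# (e.g., "SOL" vs. a matched control set).
print(df.groupby("author_group")["turnaround_days"].median())

# Share of each group's papers accepted within two days of submission.
print((df["turnaround_days"] <= 2).groupby(df["author_group"]).mean())
```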

Thoughts? I'm all for quicker, more efficient peer review, but you aren't going to get that many reviewers who all review within 2 days that often. (Yes, I've done same-day peer reviews, but even then, the probability of getting all three reviewers to do that, repeatedly, seems extremely unlikely, especially when it happens more often for articles authored by certain people in supposedly double-blind journals.)
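
To make that intuition concrete, here's a quick back-of-envelope sketch. The 5% figure is purely an assumption for illustration, not an estimate from the posts' data:

```python
# Rough back-of-envelope: if each reviewer independently returns a review
# within 2 days with some small probability p, how likely is it that all
# three do so for one paper, let alone for dozens of papers in a row?
p = 0.05          # assumed chance one reviewer finishes within 2 days (made up)
n_reviewers = 3
n_papers = 43     # papers accepted within two days, per the blog posts

p_one_paper = p ** n_reviewers
p_every_paper = p_one_paper ** n_papers

print(f"One paper, all {n_reviewers} reviewers within 2 days: {p_one_paper:.2e}")
print(f"All {n_papers} papers like that: {p_every_paper:.2e}")
# Even with a much more generous p, the joint probability collapses toward
# zero, which is why repeated 2-day acceptances look like no review at all.
```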

When you're using primarily volunteer reviewers, anything less than a week is pretty much impossible. I can't imagine many would agree to that strict of a timeline. Although the usual deadline I get of a month when I review seems a bit long. Maybe somewhere in the middle would work, I could swing 2 weeks. I think part of the holdup is also in the editorial process. Initial editorial review, assigning of reviewers, final editorial decision, etc. I've seen those processes hold something up for a couple months by themselves before.
 
Yeah, which is why this seems so sketchy, especially because some authors have anecdotally reported getting no reviewer comments at all. (Btw, these aren't open-access, pay-to-publish, or fly-by-night journals. All are pay-walled and well-respected in DD/ASD research [or they were].)
 
I have been doing it wrong the whole time. I should be making up my results and publishing without peer review in journals where the editors are my friends.

It makes me wonder if non-clinical/counseling doctoral programs in psychology ever teach ethics.
 
Tbf, the studies were likely legitimate--just not actually peer-reviewed.

Full disclosure: I have a publication in RIDD that was "fast tracked," though *not* with any of the named authors. I remember emailing the first author when we got the acceptance and asking if it was indeed a peer-reviewed journal because the turnaround was so quick. She was surprised by the quick acceptance, too, but RIDD is one of the higher-caliber/higher-IF/better-respected ASD/DD journals, so we just figured it was good luck, especially because our MS was, well, good (it's been highly cited since then). I stand by that article and the quality of our work and am more than a bit annoyed that it's being caught up in this probable fraud web that we had nothing to do with. If the allegations are what they seem to be, those four authors have screwed over a lot of innocent scholars.

I do worry especially for the grad students/new faculty who have a CV full of RIDD/RASD/JDPD/DN publications with Matson and/or SOL that have gone from looking impressive to suspect through no fault of their own. This also screws over the people who published good, high-quality work in these journals in good faith. Like I said, these aren't predatory/academic spam journals--they have/had good reputations, good impact factors, and have published a lot of work others consider to be good, etc. These were considered some of the top-tier journals for autism and DD research, especially for work that didn't fit within strict ABA journals like JABA and TAVB (and were arguably on the same or a better level than those).
 
I believe this happens, and I also think it is one of the problems with the peer review system. It really isn't blind a lot of the time, and I wish more journals had objective metrics for reviewer recommendations. I say that as someone who has probably had several pubs accepted with less scrutiny than I expected.

ETA: I think peer review still happens, but it is softer in a lot of cases than it should be. Some reviewers are lazy and/or some editors favor certain paradigms. The example above is extreme, but I think less obvious versions of it are happening out there.
 
I've published in Research in Developmental Disabilities before, and the turnaround was fantastically fast. That being said, we also got fantastic reviews back that made it very clear the reviewers had read the manuscript, considered it, and provided good content feedback. Contrast that with the six-month review cycle I just went through for another article (thankfully, the reviews finally came back yesterday), or the lengthy 1.5-year post-acceptance wait for another article to actually appear. That being said, it does seem suspect for this to occur repeatedly, and even my RIDD publication was about a 2-3 week turnaround.

I think the issue here extends beyond these journals and gets at the question of how effective the peer review process is overall--not only because it isn't really 'blind,' as mentioned above, but because reviewers so often misunderstand your argument, make idiotic recommendations, or push an agenda. Too often, review does little to ensure that what gets published is quality work. After all, how often have you gotten 2-3 reviews back that all clearly agreed with one another, were consistent in their feedback, and provided meaningful recommendations? If the answer is "a lot," please tell me which journals, lol.

My favorite submission snafus from the past couple of years include:
- An editor insisting he would not send our MS out for review until we emphasized his theory of personality (instead of the FFM) as the best way to contextualize our argument, offering us a chance to 'revise and resubmit' to him to do so. He even listed 4 of his own citations that he said we should include. Our article was a direct response to a question raised by a meta-analysis of the FFM and supported that meta-analysis's contention; the meta did not address his points or cite him at all, despite being published in the same journal and its author sitting on that journal's editorial board.
- Being told by one reviewer that a factor analysis of a 10-item instrument did not have a sufficient sample size with 400 people and needed "at least" 1,000 in order to adhere to any modern standards.
- Being told by one reviewer that validation of the FFM requires multi-trait/multi-method approaches, an atypical demand given the FFM's lexical theory and its inherently multi-trait nature (five factors, after all). From this, it was clear the reviewer was not an expert in the theory in question and could not provide meaningful insight.
- Being told by one reviewer to examine a 1951 article (that, to date, has not been cited by anyone).
 
I tend to think that peer review is like democracy--it's the worst system, except for every other one that's been created.
 
Hempel, C.G. (1951). All ravens are black and, yes, the sun will rise tomorrow morning. Journal of Half-Full Glasses and Inductive Optimism, 23(7), 1-18.
 
Oh, I just gotta know what that article is about.
It was about creating a "correlation tree" to examine the factor structure. The problem isn't that factor structures are based on correlation matrices (which they are), but that his method produces incorrect factor structures (partly because it doesn't use any type of rotation and makes assumptions about the orthogonal nature of the underlying structure). For giggles, I did some EFA playing around with the data after the review, and sure enough, not a bit of it held up. One should not rely on a citation from 1951 to guide one's approach to factor analysis.
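
In case anyone wants to play along at home, here's a minimal sketch of the rotation point using synthetic data and scikit-learn. It's an illustration, not the analysis described above; the 400-respondent, 10-item, two-factor setup is made up (and happens to mirror the earlier reviewer complaint):

```python
# A minimal sketch (not the actual analysis above) of why rotation matters
# in exploratory factor analysis. All data here are simulated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 400 respondents on a hypothetical 10-item instrument with two
# correlated latent factors: items 0-4 load on factor 1, items 5-9 on factor 2.
n = 400
latent = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=n)
loadings = np.zeros((10, 2))
loadings[:5, 0] = 0.7
loadings[5:, 1] = 0.7
X = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 10))

# Unrotated solution: loadings are only identified up to rotation, so the
# raw pattern can smear items across both factors.
unrotated = FactorAnalysis(n_components=2).fit(X)

# Varimax-rotated solution: an orthogonal rotation that pushes each item
# toward a single factor, which is usually what you want when interpreting
# an instrument's structure. (An oblique rotation like promax would also
# relax the orthogonality assumption, but scikit-learn only ships varimax
# and quartimax.)
rotated = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

print("Unrotated loadings:\n", unrotated.components_.T.round(2))
print("Varimax loadings:\n", rotated.components_.T.round(2))
```

On this toy setup the unrotated columns mix the two item blocks, while the varimax solution recovers the 5/5 split. And for what it's worth, 400 cases on 10 items is a 40:1 ratio, well past the usual rules of thumb.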
 
I agree with the democracy analogy. I think the larger problem is that there has not been much work to standardize what a review should be, and many editors do very little to ensure quality reviews. We say peer review is the best system, but the only other system we have tried is not having one. It's plagued by lengthy reviews, useless reviews from people who clearly do not understand the content area, and 'anonymous' reviews within fields small enough that you know exactly who is reviewing you. I'm not sure what a better system would be, but it could use a forum for discussion at APA some time*.

*I admit very little would be accomplished by such a forum, because we as psychologists like to talk more than we like to take action.
 