Lit Reviews, Understanding/Integrating Research


UhOh
Hey SDNers,

I love y'all just in anticipation of the feedback I'm gonna get on this question 🙂. This summer my program involves intensive research and writing. My task is to gather a lot of research, comprehend it, organize it, and integrate it into summary form, all in a limited time. Over and over again. Fun 😉.

I've been struggling badly because I keep seeing inconsistent findings that I don't know how to explain, so I don't know what conclusions to draw from the research. I'm also having trouble simply summarizing research... I feel like there's so much to say about even a single study, and it's hard to say it concisely. I have published papers, so it's not that I don't know how to do this; I think the time constraint is what's making it harder. Another issue is more basic: I'm having trouble just finding studies in the topic area. I do know how to conduct literature searches, though, so maybe advice on that last issue isn't really needed (?).

Any advice would be greatly appreciated! Hope y'all's summers are awesome! 🙂
 
I struggle with this as well at times. My biggest issue is that no matter how thoroughly I "think" I've searched the literature, I will always turn up articles I missed. Early on I was perhaps a naive perfectionist and believed you should read everything on a topic carefully before writing an article. A few years later... I don't believe anyone actually does this. Read enough to have a good sense of the literature. That doesn't mean ignoring papers you come across out of laziness, but it does mean you could spend literally years chasing down papers from obscure journals whose omission really won't impede your ability to do the research (unless it's a meta-analysis, of course).

A related issue is where to draw the boundaries... these are often fuzzy. It is perhaps possible to do a thorough lit search on, say, "factor analyses of measure X," but if you are doing something theoretical, the boundaries are usually not clear enough to ever finish. I was ridiculously thorough in reading the literature for my thesis and am still uncovering articles, or whole literatures, I'd never seen before.

RE: Conflicting findings - every topic has them. For the topics that don't, it's likely due to publication bias, or the area being so new that there is only one study out 🙂 All I can really suggest is to do the best you can. Realize that, depending on the research area, there are MANY things that can drive differences in findings, and often those things are not even reported in the journal article. This is why replication is so critical, and why it is usually more productive to look for themes than to try to summarize individual studies.

Not sure if that helps or makes things worse, but I thought I'd at least tell you that you aren't alone 😄
 
On a similar note, this article was in the LA Times on Sunday and was covered by NPR today. I'm interested in what you all might have to say about a question the article raises that is meatier than its title question:
http://www.latimes.com/health/la-sci-duh-20110529,0,725109.story

The point that sparked my interest is the idea of over-publication: "publish or perish" incentives dilute the databases, so you might search for months and wade through hundreds of articles but still miss something crucially relevant, as the earlier poster mentioned.

I've personally worked in labs with opposite perspectives, one publishing extensively and the other rarely but selectively (both PIs tenured). I'm split on the issue, primarily because any reform would mean splitting hairs over what counts as "substantive" research, but it's interesting to think about.

Could there be a better way to delineate the topic areas covered by each journal, perhaps? ...though almost any article could be considered to overlap multiple disciplines.

I hate to miss out on good research just because I couldn't think of exactly the right search term, or because it hasn't been cited by anyone else yet...
Thoughts?
 
It gets really fun when supposedly "standard" classifications are not so standardized from study to study. I'm careful to avoid definitive terminology for anything short of a law of physics. 😀 Breaking down journal articles and making sense of the data and their conclusions is very much a learning process.
 
Interesting article. I think it's also because you can't just make assumptions in science. My Master's thesis seems pretty obvious, and my data matched exactly what I thought they would, but if I hadn't done that preliminary study, anything I tried to study further in the topic area would have been questioned with, "Okay, where is the empirical evidence that this trend actually exists?"
 

+1. In my area of research, for a long time, people just assumed the opposite of what a lot of empirical research has since found. In fact, our whole area of research was based on some questions that my PI's PI was advised to throw into a larger survey at the last minute. The results were so interesting--although the implications were kind of depressing--that they launched our whole body of research, which, 10 years later, has been supported by many, many large grants and has yielded tons of publications (and hopefully aided our target population as well, even though the intervention phase of research on this issue is quite nascent--a lot of time was devoted to just understanding the issue first, which was a prerequisite to any intervention development).

I do think lit searches encounter a lot of problems, though--speaking as someone who literally searched through 3,000+ abstracts for a systematic review (we had a lot of non-specific keywords). Unfortunately, even meta-analyses and systematic reviews may not be as thorough as we might hope (I'd say more on this, but in the interest of a manuscript I'm writing, I'd better not for now).

Grouping by topic is really problematic, IMHO, though. Most of the work I've done could fit equally well in two or three different areas, and it can be a bit of a struggle to decide what type of journal to target... I think more universal indexing would be helpful, though. Searching through several different databases is really, really time consuming, especially because you get a whole lot of duplication crowding out any unique results, and it can get mind-numbing after a while.
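
For what it's worth, I've started scripting the de-duplication step. A minimal sketch, assuming you've already exported the records from each database to CSV files with "title" and "doi" columns (the file names and column names here are just placeholders):

```python
import csv

def normalize_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical
    titles from different databases compare equal."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def dedupe(files):
    """Merge exported search results, keeping one record per study.
    Prefer the DOI as the key; fall back to the normalized title."""
    seen, unique = set(), []
    for path in files:
        with open(path, newline="", encoding="utf-8") as f:
            for record in csv.DictReader(f):
                key = (record.get("doi") or "").lower().strip() \
                      or normalize_title(record.get("title", ""))
                if key and key not in seen:
                    seen.add(key)
                    unique.append(record)
    return unique

# Hypothetical export files from three databases:
records = dedupe(["pubmed.csv", "psycinfo.csv", "embase.csv"])
print(f"{len(records)} unique records after de-duplication")
```

It won't catch everything (DOIs are missing from older records, and titles sometimes differ slightly between indexes), but it cuts the mind-numbing part way down.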
 
RE: Over-publication - I find this an interesting notion. I can argue in both directions: that we over-publish, and also that we under-publish!

I just ran an updated literature search for a study I started in mid-'09. The search turned up 370 new articles published in that time frame that are at least peripherally relevant to this area, with about 50-60 being directly relevant. This isn't too obscure an area, but there are definitely a limited number of people doing similar work - I imagine it would be far worse for folks in a broader area. As I mentioned earlier, I originally had some notion that it was possible to be completely thorough in a lit search, but that is unrealistic - there is simply no way to keep up with all the relevant literature unless you adopt an overly narrow definition of "relevant," which I expect would be a big mistake and would slow progress.
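
Incidentally, this kind of "what's new since date X" check is easy to automate. A minimal sketch against NCBI's public E-utilities search endpoint for PubMed - the search term and date window are just placeholders, and it assumes the `requests` package is installed:

```python
import requests

# NCBI E-utilities search endpoint (PubMed).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def new_articles_since(term, mindate, maxdate, retmax=200):
    """Return PubMed IDs for articles matching `term` that were
    published between mindate and maxdate (YYYY/MM/DD)."""
    params = {
        "db": "pubmed",
        "term": term,            # placeholder query
        "datetype": "pdat",      # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    reply = requests.get(ESEARCH, params=params).json()["esearchresult"]
    print(f"{reply['count']} hits for '{term}'")
    return reply["idlist"]

# Hypothetical query covering the update window described above:
ids = new_articles_since("adherence AND intervention",
                         "2009/06/01", "2011/06/01")
```

It doesn't solve the relevance-screening problem, but at least the "did anything new appear?" part stops eating afternoons.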

On the other hand, it's frightening to think that 20 people could have already run this study and didn't bother publishing it because it didn't pan out. There is no possible way to know this, and it likely results in massive duplication of effort and wasted time.

Some change is needed, and I think it needs to happen on multiple levels:
1) Pub counting needs to stop. Impact needs to be defined in more diverse ways - things with lots of citations may not alter the field (especially considering self-citation), and things with few citations may be quite substantive, particularly in clinical areas. Even early-career people should be able to take on large, complex projects without fear that it will negatively impact their careers. I've joked before that I'm jealous of folks who do exclusively survey research... I've worked on a few of those projects, and the pilot-testing phase of my thesis alone took more man-hours than some of those studies took from IRB approval to manuscript submission. Research area plays a big role in how "much" someone can publish, and I actually feel like the need for publications has discouraged me from pursuing certain areas at this stage. This change needs to happen at both the university and federal levels, but I'm not sure how likely that is. On some level, pub counting makes intuitive sense (it's an "objective" measure of your contribution); on another, it emphasizes just one aspect of being a good scientist.

2) We need better search tools. Cochrane has the right idea - we need more efforts like that in non-clinical areas. I'd actually like to see a national (or international) catalog of studies, similar to what is now done for pharm trials, so that even if a study fails it is still "known." Easily searchable, helps resolve the file-drawer problem, etc. Minimal peer review (maybe equivalent to conference abstracts), just to make sure these aren't some undergrad's final project from research methods. (A rough sketch of what a registry entry might look like follows at the end of this post.) This would especially help with my new pet peeve. Not to pick on social psych, but it seems especially common there... "multi-experiment" papers. Many of these 3-experiment papers were actually 10-experiment papers with the results that didn't pan out dropped. Work like that carries no substantive meaning and harms the field more than it helps, but it seems to be increasingly common. Registering studies would force people to be a bit more honest.

3) Journals need to make better use of space, better use of supplemental materials, and more editorial/reviewer discretion and checking. Basically, move to more of a "scientific" review than a manuscript review. I've kind of taken a dislike to some APA journals because the intro is often unnecessarily bloated. Keep intros short; put the rest in supplemental materials. Similarly, I don't think the current peer review system actually reviews the important stuff. It's more a review of the writing than of the science... a great writer can make a terrible study sound wonderful. We have papers whose results "I" don't believe that are still published in absolutely fantastic places. There is often NO attempt to check all the little details necessary to make sure everything was done correctly, and little incentive for researchers to do so. Give me a concise report, but make the details accessible. If people start having to report and check over the details, it should slow the rate of publishing and improve the quality of the literature. I actually think this is the most critical piece, but it's not going to be an easy transition.
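
As promised under point 2, here's a rough sketch of the kind of record such a registry might store - the field names are entirely hypothetical, just to make the idea concrete. The key point is that the entry is registered before data collection and gets an outcome even when nothing is ever published:

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """Hypothetical study-registry record, filed before data collection."""
    title: str
    hypotheses: list[str]          # stated up front, like pharm-trial registration
    design: str                    # e.g., "between-subjects experiment"
    planned_n: int
    keywords: list[str] = field(default_factory=list)
    status: str = "registered"     # "registered" | "completed" | "abandoned"
    outcome: str = ""              # brief result summary, even for null findings

def search(entries, keyword):
    """Naive keyword search over the catalog."""
    kw = keyword.lower()
    return [e for e in entries
            if kw in e.title.lower()
            or any(kw in k.lower() for k in e.keywords)]

catalog = [
    RegistryEntry(
        title="Priming effects on working memory (experiment 7 of 10)",
        hypotheses=["Prime X improves recall"],
        design="between-subjects experiment",
        planned_n=80,
        keywords=["priming", "working memory"],
        status="completed",
        outcome="Null result; never submitted for publication.",
    ),
]
print(search(catalog, "priming"))
```

With something like this in place, the dropped experiments 4 through 10 of a "3-experiment" paper would at least be discoverable, null results and all.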
 