Data manipulation, unreproducible results, etc., etc.


DendWrite

I'm sorry to make another thread like this, but I feel like I really have to (and hope to garner input from some current / completed MD/PhD students about this).

1) Excluding negative data.
Yes, I know there are legitimate reasons to do this -- sometimes the experiment really doesn't work right, etc. But what about those cases where you are just excluding data based on what fits your hypothesis?

2) Running "representative" Western blots / real-time PCRs / etc.

Representative results aren't real results, and you're doing a disservice to yourself and other scientists reading your paper by presenting them as such.

Some specific things I have witnessed happen in the lab where I work (which is at a top-10 medical institution ... not bragging, just to mention how crazy it is that this stuff is done by people who are supposed to be "the best"):

1) Deleting echocardiographic data from the machine that causes the results to go the wrong way and printing out the "correct" data

2) Running western blots with repeated samples (that are declared as unique samples) to reduce variability and make results statistically significant

3) Reprobing the same western blot with a "different" antibody to the same protein to try to get the bands to look better (along with decreasing exposure time / other trickery).

4) Performing densitometry of western bands treating each band differently in order to eke out differences (that are less than 10%) and make them statistically significant

5) Using controls for other experiments (that are "known" to work) in order to exaggerate observed effects

6) Running an experiment with the same SINGLE sample from each group three times, and claiming that n=3.

If you ever talk about people who photoshop images that appear in journals, people are aghast, yet when you re-run an experiment with "only the good samples," it's fine. It's as if faking data with a scientific technique is acceptable, but damn it, if you use technology, you are a traitor to the field and must be banished.

I don't know what to do. I've worked my ass off in the lab and will have a few second/third author papers to my name by the time I graduate, so I'm set to get into an MD/PhD program (and prestige largely doesn't matter to me...I just want someplace with a supportive mentor who is willing to let me work alone and screw up until I figure out how to do things the right way, by myself). But these experiences have turned me off so much from research that I'm not sure I want to pursue it anymore.

Another thing: research essays and interviews. You are expected to sound all gung-ho and excited about research, yet what I really want to talk about in an interview is how few of the scientists I've met really seem to have integrity when it comes to data analysis (and that the ones who are most successful and most heavily funded are sometimes the most suspect). But I don't think this will really fly with most of my MD/PhD interviewers, who will likely see me as a spoiled college student who has never had to choose between being honest and not eating / getting fired from a faculty position and fudging some data / getting a grant funded.

Again, maybe I've just had some bad experiences and this isn't universal, and I could be perfectly happy doing science. But in my experience (and talking with a few friends who have done research in different departments), I'm not alone in this.

Any input? I just don't want to get to my PhD years and discover that in order to "get out" and back to my 3rd and 4th years I'm going to have to put out B.S. studies like some of the ones I described above. It's not worth it to me, because the studies are not only entirely misleading and a detriment rather than a contribution to science and their field, but also absolutely MEANINGLESS. I can't imagine spending 3 or 4 years of my life producing something that is devoid of meaning and largely insignificant.

 
The best advice I can give you, DW, is to choose your mentor wisely. Integrity in the lab starts from the top. When you start your lab rotations, pay close attention to what is going on around you, just as you already are. If you have a bad feeling about a PI or a lab, that should be reason enough to look elsewhere, no matter how famous the PI is, or how prestigious the institution is, or how much grant money that lab has.
 
The best advice I can give you, DW, is to choose your mentor wisely. Integrity in the lab starts from the top. When you start your lab rotations, pay close attention to what is going on around you, just as you already are. If you have a bad feeling about a PI or a lab, that should be reason enough to look elsewhere, no matter how famous the PI is, or how prestigious the institution is, or how much grant money that lab has.

Great advice. Sometimes lots of money and a big name is not the best situation.
 
Yes, definitely choose your mentor wisely. I think it's OK to ask them, in a polite and tactful way, whether or not they agree with your views on things. There was actually a recent story on NPR about falsification/manipulation of data and about how alarmingly prevalent it is. Many journals are aware of your concerns and fewer and fewer are accepting representative anythings, unless you've got some quant to back it up. Other things you listed are outright fraud and shouldn't be happening anywhere.

Leaving data out sometimes bothers me too, but I definitely think it's OK if you know there's a technical reason not to include it, e.g., I accidentally made some solution or prep incorrectly. I've run into the situation in my own research where I've gotten puzzling results despite good technique. So puzzling that it really didn't affect the hypothesis, since there was obviously something I didn't think of, e.g., another pathway. It didn't affect my conclusions and was actually just the basis of another paper. Negative data seems to be a term that means different things to different people. It's much easier and more convincing to provide evidence that "something happens" as opposed to "nothing happened."

Basically, despite the objectivity of science and research, a lot of it still comes down to good judgment and integrity. I think treating p < 0.05 as the bar for significance is silly. We're letting ourselves off way too easy, but I digress. :)
 
I've run into the situation in my own research where I've gotten puzzling results despite good technique. So puzzling that it really didn't affect the hypothesis, since there was obviously something I didn't think of, e.g., another pathway. It didn't affect my conclusions

i've only worked in basic science, and this sums up negative data for me :laugh:

with regard to "representative data", a lot of top-tier journals now require you to send in all the raw, unedited, uncropped scans of western blots with your manuscript. it's a pain in the butt and translates into several dozen hours just scanning autorads, but it definitely weeds out the bad motives behind "representative data", because in theory, you DO want to show pretty, clean data. my last 2 Nature papers required this.

edit: i do agree that a lot of what the OP wrote is indeed wrong, and sadly, prevalent. but it takes one error in judgment to end an entire career, so people who do it are basically gambling their careers away. publish a top paper only to have to retract it a year later due to intentional data manipulation = no future in science.
 
yet what I really want to talk about in an interview is how few of the scientists I've met really seem to have integrity when it comes to data analysis (and that the ones who are most successful and most heavily funded are sometimes the most suspect).

We get plenty of applicants who are actually excited to talk about their research, so I imagine your unique approach will not go well. If I were interviewing you, this would flag you as a likely dropout rather than an insightful realist. You will have plenty of time to complain about sketchy data, incompetent reviewers, low pay, long hours, uncaring advisors, etc. after you actually matriculate.
 
We get plenty of applicants who are actually excited to talk about their research, so I imagine your unique approach will not go well. If I were interviewing you, this would flag you as a likely dropout rather than an insightful realist. You will have plenty of time to complain about sketchy data, incompetent reviewers, low pay, long hours, uncaring advisors, etc. after you actually matriculate.

I sort of made that comment in jest -- obviously I have enough tact not to rant about this sort of thing during an interview. I truly am excited about research, and I've spent my fair share of long hours in the lab (not low pay -- NO pay) already, and I'm still up for this. My ONLY complaint (and concern) is that perhaps this kind of data bending is so prevalent that you almost HAVE to make your results look "better than reality" in order to keep up with the pack and publish papers / impress a thesis committee. Fortunately, from a lot of the replies to this thread, it sounds like that's not the case.

As far as excluding data: I totally agree. I'm not suggesting that 100% of data collected should be used in an analysis. A sample with a value more than 2 SD from the mean is likely messed up, for instance. My main objection is to people with the mentality of "Oh, the data did not fit my hypothesis -- thus, the experiment must be flawed," rather than "the data isn't adding up; maybe I need to rethink my hypothesis."

I also agree about p-values. I think it's hilarious to hear people say p < 0.05 therefore IT IS TRUE without looking at sample size, experimental design, anything else. What's more, the second that you start excluding data to tip the scales in your favor in any way, that p-value is MEANINGLESS.
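Just to convince myself that last point isn't hand-waving, here's the kind of quick simulation I have in mind (a rough sketch with made-up numbers, not real data; both groups come from the same distribution, so every "significant" result is a false positive):

```python
# Sketch: how post-hoc exclusion of "inconvenient" points inflates false positives.
# Both groups are drawn from the SAME normal distribution, so there is no real
# effect and any p < 0.05 is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 10_000, 6
hits_all, hits_trimmed = 0, 0

for _ in range(n_sims):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    hits_all += stats.ttest_ind(a, b).pvalue < 0.05

    # "Exclude the aberration": drop the point in each group that most
    # hurts the hoped-for result (a lower than b).
    a_trim = np.delete(a, np.argmax(a))
    b_trim = np.delete(b, np.argmin(b))
    hits_trimmed += stats.ttest_ind(a_trim, b_trim).pvalue < 0.05

print(f"False-positive rate, all data kept:      {hits_all / n_sims:.3f}")      # ~0.05
print(f"False-positive rate, 'outliers' dropped: {hits_trimmed / n_sims:.3f}")  # well above 0.05
```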

I think that in the end you all have given good advice and that I just need to keep my head down and get used to this sort of thing (as it's going to continue for the next ~15 years of my life, at least).
 
1) Deleting echocardiographic data from the machine that causes the results to go the wrong way and printing out the "correct" data

2) Running western blots with repeated samples (that are declared as unique samples) to reduce variability and make results statistically significant

3) Reprobing the same western blot with a "different" antibody to the same protein to try to get the bands to look better (along with decreasing exposure time / other trickery).

4) Performing densitometry of western bands treating each band differently in order to eke out differences (that are less than 10%) and make them statistically significant

5) Using controls for other experiments (that are "known" to work) in order to exaggerate observed effects

6) Running an experiment with the same SINGLE sample from each group three times, and claiming that n=3.

Any input? I just don't want to get to my PhD years and discover that in order to "get out" and back to my 3rd and 4th years I'm going to have to put out B.S. studies like some of the ones I described above. It's not worth it to me, because the studies are not only entirely misleading and a detriment rather than a contribution to science and their field, but also absolutely MEANINGLESS. I can't imagine spending 3 or 4 years of my life producing something that is devoid of meaning and largely insignificant.

Good God, man. I hope this is hyperbole; otherwise RUN from where you are and don't look back. Having your name on papers is not a prerequisite for getting into an MD/PhD program, although your chances will be significantly lower if it comes out that these papers had to be retracted due to cooked data.

There is IMHO often some low level cooking due to a subconscious effort to get an experiment to work right- although I doubt it has an effect most of the time. What I mean by this is having 4 westerns, and picking the "best looking one" for publication. HOWEVER, what you are saying here is far from that- calling a repeat sample as unique is fraud, as is excluding samples that don't fit the hypothesis (although there is often a good reason to exclude a sample that is legitimate). I also can't believe your boss would label a protein with the wrong antibody and say otherwise on a publication. That's so wrong my head asplode.

Actually, what I would do is tell your boss to give you $$$ and a new Lexus or you will send a letter to the editor pointing out fraud.... ok, don't do that, if you do, you didn't hear it from me :)
 
There is IMHO often some low level cooking due to a subconscious effort to get an experiment to work right- although I doubt it has an effect most of the time. What I mean by this is having 4 westerns, and picking the "best looking one" for publication. HOWEVER, what you are saying here is far from that- calling a repeat sample as unique is fraud, as is excluding samples that don't fit the hypothesis (although there is often a good reason to exclude a sample that is legitimate). I also can't believe your boss would label a protein with the wrong antibody and say otherwise on a publication. That's so wrong my head asplode.

Unfortunately it's not hyperbole. To give credit to my boss (the head of the lab), he's a very upstanding guy, very supportive of me and really willing to have me work in the lab, and I owe him a lot. I think that if he knew about what was going on he'd certainly be opposed to it and would shut it down. The problem is that he no longer does the experiments (like a lot of established PI's) and it's all down to the post-docs that work under him. They are the ones who have been guilty of what I listed above.

The problem for me is that it's difficult to let my boss know about what's going on without totally alienating myself from everyone in the lab. It's not like people just make up data all the time, and they do know a lot and have taught me a lot and continue to do so, and I want that training. It's just that I see stuff like this happen and really start to question the validity of some of the projects. The reason that I'm staying is because I've been given my own project to work on now, and I figure that I can do it the "right" way -- even if that doesn't result in a pub at least I can feel good about it, or something.

It's a tricky issue regarding NOT getting my name on these papers, as I've contributed a LOT (in some cases over 50%). If I tell my boss I don't want to be on the paper, I'll need a reason, and if I say that "I don't think I deserve it because I didn't work hard enough" it won't fly, and he'll know that something's up.
 
Good God, man. I hope this is hyperbole; otherwise RUN from where you are and don't look back. Having your name on papers is not a prerequisite for getting into an MD/PhD program, although your chances will be significantly lower if it comes out that these papers had to be retracted due to cooked data.

There is IMHO often some low level cooking due to a subconscious effort to get an experiment to work right- although I doubt it has an effect most of the time. What I mean by this is having 4 westerns, and picking the "best looking one" for publication. HOWEVER, what you are saying here is far from that- calling a repeat sample as unique is fraud, as is excluding samples that don't fit the hypothesis (although there is often a good reason to exclude a sample that is legitimate). I also can't believe your boss would label a protein with the wrong antibody and say otherwise on a publication. That's so wrong my head asplode.

Actually, what I would do is tell your boss to give you $$$ and a new Lexus or you will send a letter to the editor pointing out fraud.... ok, don't do that, if you do, you didn't hear it from me :)

If people did not do that in my lab we would never publish anything. Reviewers want perfect data, but variability in animal samples is usually so high that sometimes you have to ignore some as failed treatments. If four out of six work, it's usually trustworthy data IMO.
 
If people did not do that in my lab we would never publish anything. Reviewers want perfect data, but variability in animal samples is usually so high that sometimes you have to ignore some as failed treatments. If four out of six work, it's usually trustworthy data IMO.

This is pretty much the reason I'm constantly struggling with my decision to stay in mostly chemistry labs. Chemistry and much of biochem are pretty straightforward result-wise: if you cannot reproduce the same result over and over, it doesn't exist.

To the OP: It sounds like your experience may be a bit extreme (running and never looking back may not be such a bad idea), but from what I understand it is somewhat the reality of doing research that is much more on the biology side of things. So much is not understood about what you're doing, there is an infinite number of things that need to be controlled for that you often cannot... and as a result there is a lot of room for interpretation.

In any case, NEVER get yourself in a situation where your name will be on something that may be uncovered as untrue. If it ever happens you can kiss your research career goodbye. There are no second chances.

And as others have said, it's probably not a good topic for discussion during your interviews. It will make you sound overly anal and inexperienced with the realities of research. While you may be absolutely correct about your experience, your interviewers are going to assume that your group follows the same practices as any other.
 
If people did not do that in my lab we would never publish anything. Reviewers want perfect data, but variability in animal samples is usually so high that sometimes you have to ignore some as failed treatments. If four out of six work, it's usually trustworthy data IMO.

That's what controls are for. And I would also say it's better not to publish than to publish crap. But I did say there are good reasons for omitting samples.
Having worked with animals, I agree with you that the variability is inherent, but that's why you need enough of a sample size to overcome it. If you only pick out the cases that fit the hypothesis, but throw out the rest, how do you know what you are saying is true? Part of the scientific method is having a falsifiable hypothesis. If you throw out samples that don't fit your hypothesis it is not falsifiable, and thus not scientific.
 
To the OP: It sounds like your experience may be a bit extreme (running and never looking back may not be such a bad idea), but from what I understand it is somewhat the reality of doing research that is much more on the biology side of things. So much is not understood about what you're doing, there is an infinite number of things that need to be controlled for that you often cannot... and as a result there is a lot of room for interpretation.

Room for interpretation is one thing -- fudging results is another. If you are doing an assay and think that some of the results are too high and want to throw out experiments that are 2 SD above the mean, you should throw out those that are 2 SD below as well. That may be justified. But if you already "know the answer" and an experiment does not fit your hypothesis so you throw it out, you are not really doing an experiment at all. You are setting yourself up to be a bad scientist.
 
That's what controls are for. And I would also say it's better not to publish than to publish crap. But I did say there are good reasons for omitting samples.
Having worked with animals, I agree with you that the variability is inherent, but that's why you need enough of a sample size to overcome it. If you only pick out the cases that fit the hypothesis, but throw out the rest, how do you know what you are saying is true? Part of the scientific method is having a falsifiable hypothesis. If you throw out samples that don't fit your hypothesis it is not falsifiable, and thus not scientific.

In a perfect world I would agree with you. Unfortunately, research is about time and money, so in a hypothetical situation where you have a choice of doing six more mice while excluding "statistical aberrations" to get to that P<0.05 instead of doing twelve more mice and including everything, I think a lot of people will pick the quickest option. Plus, if you show the data that did not work perfectly (treated animal looks like a control) even if there is statistical significance the reviewer is likely to jump all over it and ask for more experiments and more controls.
 
In a perfect world I would agree with you. Unfortunately, research is about time and money, so in a hypothetical situation where you have a choice of doing six more mice while excluding "statistical aberrations" to get to that P<0.05 instead of doing twelve more mice and including everything, I think a lot of people will pick the quickest option. Plus, if you show the data that did not work perfectly (treated animal looks like a control) even if there is statistical significance the reviewer is likely to jump all over it and ask for more experiments and more controls.

This is what I don't understand. How is excluding these statistical aberrations any different from flat-out making up data? As long as you have a reasonable hypothesis and obtained the data, say, twice, why not just say you got it 4 times and submit a paper? That's essentially what you are doing by "excluding" these other samples as far as the p-value goes. Yet when you suggest that, people jump all over you and are like "ethics, man, ethics." Yet when you just exclude samples it's like ...yeah, that's okay, maybe I could understand that.

I'm not criticizing you, and I do realize that research is about time and money. But is everybody really cutting corners like this? Because if they are, it seems to me that research as a whole is really bad at what it purports to be useful for.
 
I've got to agree with others that excluding mice just because they give you high p-values is not kosher. If you exclude them because they got sick (bad animal facility) or if you accidentally gave one an air embolus, then OK, exclude it. Otherwise, you probably should've done a power calculation to help figure out how many you needed. The obvious consequence of arbitrarily leaving out data points, i.e., mice, is that your data starts looking weird. I don't really care if your p-value is < 0.000000001 if n = 2. Obvious hyperbole, but the reviewer will probably jump on you anyway. Secondly, if you actually do manage to get it published as is, odds are it won't be in a respectable journal.
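For what it's worth, the power calculation itself is only a few lines. A rough sketch (the effect size is an assumed placeholder you'd estimate from pilot data, not a recommendation):

```python
# Sketch: how many mice per group for a two-group comparison at alpha = 0.05
# and 80% power, using statsmodels. The effect size (Cohen's d) is an assumed
# placeholder you would estimate from pilot data.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.9   # assumed: (mean difference) / (pooled SD) from pilot data
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05,
                                          power=0.8,
                                          alternative='two-sided')
print(f"Mice needed per group: {n_per_group:.1f}")  # round up to the next whole mouse
```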

-X

In a perfect world I would agree with you. Unfortunately, research is about time and money, so in a hypothetical situation where you have a choice of doing six more mice while excluding "statistical aberrations" to get to that P<0.05 instead of doing twelve more mice and including everything, I think a lot of people will pick the quickest option. Plus, if you show the data that did not work perfectly (treated animal looks like a control) even if there is statistical significance the reviewer is likely to jump all over it and ask for more experiments and more controls.
 
I always thought that if all the readouts showed that the treated animal looked like a control, then the experiment failed and you had to repeat it. I never saw people actually report those points in their data. Maybe I was taught wrong on this one, but it seemed like a reasonable assumption to make.

I'm not criticizing you, and I do realize that research is about time and money. But is everybody really cutting corners like this? Because if they are, it seems to me that research as a whole is really bad at what it purports to be useful for.
I think there will always be bad science but it is important to be honest with yourself about what you are doing and hold yourself to a high standard. With all the negatives there are still a lot of people doing quality work so it's not all gloom and doom.
 
I always thought that if all the readouts showed that the treated animal looked like a control, then the experiment failed and you had to repeat it. I never saw people actually report those points in their data. Maybe I was taught wrong on this one, but it seemed like a reasonable assumption to make.

I think there will always be bad science but it is important to be honest with yourself about what you are doing and hold yourself to a high standard. With all the negatives there are still a lot of people doing quality work so it's not all gloom and doom.

No, no, I definitely agree. If the treated animal looks identical to a control (and your treatment has worked before / you have good reason to believe it works), it's OK to exclude it. I agree that no one averages that data in. What I'm referring to is, say you get a response that's 80% of what you get on average with treated samples. Or 120%. Even though +/- 20% isn't huge, if you have +/- 10% variation in your controls and the effect isn't that big, it could easily be nonsignificant, especially with small n.

It's like an experiment I did this week: three groups (two treated, one control), each with n=5. Lots of variability in each group. However, if I pick the best 3 from each group, the results are _perfect_ and highly significant. So the question is -- can I exclude the samples that "mess up" the results? In my mind, no, because they're intermediate values between treated and non-treated, and since this hasn't been done before, who am I to draw a line? Drawing any line anywhere automatically assumes you know what the result is, meaning you don't need to do statistics, because damn it, you're RIGHT...p = 0.
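Just to check that intuition, here's a rough simulation of exactly this setup (made-up numbers; all three groups are drawn from the same distribution, so any "effect" is pure noise):

```python
# Sketch: three groups of n=5 drawn from the SAME distribution (no real effect).
# Keeping only the "best" 3 per group -- lowest controls, highest treated --
# makes a one-way ANOVA come out "significant" far more often than 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 5_000
hits_full, hits_picked = 0, 0

for _ in range(n_sims):
    ctrl, tx1, tx2 = (rng.normal(0, 1, 5) for _ in range(3))
    hits_full += stats.f_oneway(ctrl, tx1, tx2).pvalue < 0.05

    # Cherry-pick the 3 samples per group that best fit the expected ordering.
    ctrl_p = np.sort(ctrl)[:3]   # lowest 3 controls
    tx1_p = np.sort(tx1)[-3:]    # highest 3 "treated"
    tx2_p = np.sort(tx2)[-3:]
    hits_picked += stats.f_oneway(ctrl_p, tx1_p, tx2_p).pvalue < 0.05

print(f"Spurious significance, full data:   {hits_full / n_sims:.3f}")    # ~0.05
print(f"Spurious significance, best 3 of 5: {hits_picked / n_sims:.3f}")  # much higher
```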

I don't know...I have a -feeling- that it is real data and that when I do more samples this week it will turn out to be so. But I can't justify just presuming that and moving on ... that being said I'm much less experienced than a lot of you all, so maybe it's OK to do... no one has ever sat me down and been like "Ok DendWrite, this is how it works ... we know it's 'cheating' but everybody does it, so just do it to keep up."
 
In a perfect world I would agree with you. Unfortunately, research is about time and money


The "time" part needs to be emphasized. For instance, I've been working on a pilot experiment for 4 months doing two repetitions--each taking 2 months. In the first set, the controls (n=5) behaved just like the experiment group (n=5), but the other repetition (n=7 in both controls and exp) showed a very significant difference between the two groups. Each repetition alone lasts two months and involve a lot of surgery and treatments. Do I spend the time to troubleshoot so that every repetition would give me a significant difference or do I simply discard the first repetition which didn't agree with my hypothesis?
 
The "time" part needs to be emphasized. For instance, I've been working on a pilot experiment for 4 months doing two repetitions--each taking 2 months. In the first set, the controls (n=5) behaved just like the experiment group (n=5), but the other repetition (n=7 in both controls and exp) showed a very significant difference between the two groups. Each repetition alone lasts two months and involve a lot of surgery and treatments. Do I spend the time to troubleshoot so that every repetition would give me a significant difference or do I simply discard the first repetition which didn't agree with my hypothesis?

I think the moral of this story is not "ignore experiments that don't work" but rather "don't do this type of animal pilot study as a grad student."

In addition, I would find a way to set up better controls if possible.
 
i think a lot of this boils down to whether you believe what you are doing. If you don't even believe the conclusions drawn from your results, then there is a serious problem.

Manipulation/faking of data happens in almost every lab, and its direct impact is the massive amount of bs that's in the literature today. and when ppl base their research on that bs literature, their conclusions become unreliable as well. I think one thing that has to be addressed over and over again is our original purpose in doing all these experiments and the pieces of the puzzle that are missing in the first place.
 
Manipulation/faking of data happens in almost every lab, and its direct impact is the massive amount of bs that's in the literature today. and when ppl base their research on that bs literature, their conclusions become unreliable as well.

This is what is so troubling to me, I guess. Even if you are faking just a little bit of data, why are you doing it? Most likely to publish a paper. In other words, I can't see how it's any more egregious to simply make up every shred of data in a paper than it is to falsify just enough data to make a "real" experiment publishable.

I read a paper like this http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/ (basic conclusion: "Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.") and see things in the lab, with people publishing papers with data that I KNOW FOR A FACT has been doctored, and it's just sick to me. I don't mean to drag this out more than I already have, but it's so antipodal to the ethos of research... I mean, presumably biomedical research is ultimately seeking to improve the lives of patients suffering from some disease, yes? And that is, at least in part, what the NIH funds for these projects are supposed to be doing. Yet, in my experience, it's usually just a bunch of underpaid, chronically stressed academics sitting around stroking their egos.
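To make the paper's point concrete, here is roughly the calculation behind it: the post-study probability that a "significant" finding is actually true, given the prior odds R that the tested relationship is real, the study's power, and alpha. The numbers below are mine and purely illustrative, not the paper's:

```python
# Sketch of the positive predictive value (PPV) of a claimed finding,
# ignoring the paper's bias term: PPV = (power * R) / (power * R + alpha),
# where R is the pre-study odds that the tested relationship is real.
def ppv(R, power, alpha=0.05):
    return (power * R) / (power * R + alpha)

# Assumed prior odds of 1:10 (an exploratory field), 80% power:
print(f"{ppv(R=0.1, power=0.8):.2f}")   # ~0.62
# Same field, underpowered study (20% power):
print(f"{ppv(R=0.1, power=0.2):.2f}")   # ~0.29 -- more likely false than true
```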

Another anecdotal thing that's happened (several times, but once recently): sharing papers WITH THE ENTIRE LAB that the investigator is currently reviewing for a journal. Not for general interest, but because the unpublished findings have direct bearing on a post-doc's project. I don't want to know whether this particular "reviewer" had anything to do with slowing down the paper's publication.

In short, my current conception of the science I've witnessed so far is that it's all a farce. I don't say this to try to insult anyone's research projects, and indeed I'm hopeful that there is still good science out there. But the fact that I haven't seen any yet in my (admittedly rather short) "career" is very troubling to me, and it's made me strongly consider getting out of the biomedical research arena and doing something like a PhD in pharmaceutical chemistry (I know, there's another whole set of drawbacks to this path). Because it's hard to refute the characterization data of molecules available with modern analytical equipment. And if you don't believe it, it's easy to attempt to reproduce the reaction.

For instance: http://www.orgsyn.org/ ("In order for a procedure to be accepted for publication, each reaction must be successfully repeated in the laboratory of a member of the Editorial Board at least twice, with similar yields (generally ±5%) and selectivity similar to that reported by the submitters.") I'd love to see a similar restriction placed on Nature or Cell or Science papers. Along with your data and paper, send us your mice.
 
For instance: http://www.orgsyn.org/ ("In order for a procedure to be accepted for publication, each reaction must be successfully repeated in the laboratory of a member of the Editorial Board at least twice, with similar yields (generally ±5%) and selectivity similar to that reported by the submitters.") I'd love to see a similar restriction placed on Nature or Cell or Science papers. Along with your data and paper, send us your mice.

Why would you love to see that? You're kidding, right? Yeah... I could see the editorial board repeating western blots. That sure would help. Half the guys on the editorial board haven't held a pipette in years.

Sometimes it takes an expert in a specific technique to get an experiment to work. I'm not talking about easy western blots... I'm talking about the kinds of experiments that get papers in Science or Nature: ChIP-on-chip, IP of endogenous proteins and MS/MS of the pulldowns, etc. Just because some other lab rat can't reproduce it doesn't mean it doesn't work. Now if multiple labs try a similar approach and NONE can ever replicate a published phenomenon, then they should publish those results. That's how controversial issues in science happen, and how progress is made -- not by some rule that says that Mikey down in room 8096 has to be able to repeat my DNA sequencing before it is published.
 
While some of what you describe does sound sketchy, some of these actions are reasonable under certain sets of assumptions.

1) Excluding negative data.
As has been mentioned, a good control is worth its weight in gold. Being able to produce a positive result from a sample that should absolutely be positive (e.g., specific anatomical expression of a gene product with a well-defined expression pattern) or a negative from a negative (e.g., an RNase/protease/sham probe of the same) is key to molecular experiments. If the control produces the same result as the experiment simply not working, it's time to rethink your approach. If you're going to exclude data, I want experimental justification. A former co-worker of mine had to sideline months of data because she couldn't reproduce her animal results (she later found it was due to very short biological half-life, redox state, administration issues, etc., with her drug treatment). If you don't understand why your results are inconsistent, you can't toss them.

3) Reprobing the same western blot with a "different" antibody to the same protein to try to get the bands to look better (along with decreasing exposure time / other trickery).

You can't really fault antibody manipulation. There really is a lot of variability in how well they work, although Westerns are usually easier than IHC. If samples are appropriately paired on a single blot and a housekeeping protein is consistently expressed, I wouldn't worry too much.
 