Research Methods, Trials and EBP


Tranquil

This is my first trip onto this board, and I wonder if anyone would be interested in a general thread on research, trials and EBP. Just to set the scene, I will make a first post.

1. Medical research is a difficult area because on the one hand it offers so much value, and on the other, when things go wrong, it can do so much damage. One can point at huge advances made and the practical eradication of many diseases. However, things have gone wrong, and one only has to consider thalidomide, or the use of cardiac anti-arrhythmic drugs which are known to have cost more American lives than the Vietnam War. (See "When Doctors Kill: Why and How" by Joshua Perper and Stephen Cina.)

2. It is also sadly true that in scientific research there have been many cases of fraud or misconduct. In fact, Professor David Goodstein of the California Institute of Technology shows in a recent book that most scientific fraud or misconduct cases involve the biological sciences, with medical doctors disproportionately represented. (See "On Fact and Fraud" by David Goodstein, Princeton University Press.)

3. There are many reasons for what I have said in item 2, but for the purposes of this thread I will point out three and elaborate on them in subsequent posts.

Scientific Method - over centuries, experimental methods and principles have been developed; these must be thoroughly learned, and it takes a long time to learn and practise them. Indeed, it is only when you do real work that the main points begin to sink in, and getting to that point with humility is central to developing your research potential.

Ethics - in all science there is an ethical dimension, and it has to be thought through with considerable care. In medical research it is paramount, for obvious reasons. Indeed, a large number of both fraud and misconduct cases can be traced to poor ethical standards.

Statistics - when one begins statistics it can seem quite easy, but this is a false assumption, and unless you really know what you are doing you can make horrendous blunders. These days we have SPSS and Excel, so given a set of data one can generate a whole raft of statistics with zero effort. However, like any science, all statistics are hedged about with conditions and limits, so interpreting what you have been given is likely to be very hard EVEN if you are an expert. Statistics is ultimately based on probabilities, and everyone has difficulty in that area.

Sadly, the literature is replete with cases of scientific blunders made because researchers did not understand what they were doing. For example, there are many cases where researchers confused correlation and regression, were selective in which data points they used, collected the wrong data, and so on. So what we have to say here is NOT simple, and if you are to get any benefit you will have to work hard. To give a simple example, suppose my risk of stroke is assessed as 12% and my doctor tells me that if I take a statin it will reduce my risk by 16% (we will not complicate it by adding in side effects). Almost no one outside of a numerate discipline can explain why your new level of risk, if you take the statin, is roughly 10%.
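To make the arithmetic of that statin example explicit, here is a minimal sketch in Python using the invented figures above; the key point is that the 16% reduction is relative to the baseline risk, not subtracted from it.

```python
# A minimal sketch of the statin example above: converting a relative risk
# reduction into a new absolute risk. The figures are the invented ones from the text.
baseline_risk = 0.12             # assessed stroke risk (12%)
relative_risk_reduction = 0.16   # "reduces your risk by 16%" (relative, not absolute)

new_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - new_risk

print(f"New absolute risk: {new_risk:.1%}")                        # ~10.1%, i.e. roughly 10%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")   # ~1.9 percentage points
```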

It is also uncomfortably true that even the best researchers sometimes get over-confident, not to say arrogant, try to go it alone, and do not get advice from a competent statistician - that is unforgivable and may amount to misconduct.

 
This is my second post on the theme of evidence-based research and statistics, and here I discuss two basic ideas which I shall call Research Style and Research Type. Please be aware I am INVENTING data here.

RESEARCH STYLE – at this stage you need to consider whether your style is quantitative or qualitative. It is easy to confuse these two and simply think of them as describing data types, but to do so means you are missing the whole point. In general, if your outcome is in some way intended to be predictive then your style is likely to be quantitative, whereas if it is intended to be mostly descriptive then it is almost certainly qualitative. For example, suppose I decide to study infection rates after surgery.

Quantitative - here, for simplicity, I would choose two elements: the procedure and the infection rate. So over time, and with a sample of patients, I record the relevant data. Using this data I could process it statistically and predict, say, that in knee surgery the infection rate is likely to be 25% of patients.

Qualitative - knowing that 25% of patients become infected after knee surgery is interesting but not of much use in deciding what to do about it. So my next research task could be to study the surgical procedures used in knee surgery with a view to constructing a check-list which, when used by the surgical team, will lead to fewer infections. So here I end up with a description, as a series of questions (usually not more than 8), of what to do at both the start and end of an operation to minimise or prevent post-op infection.

Thus you can see that the terms Qualitative and Quantitative are to do with the kind of OUTCOME you want, not primarily the data itself - it is VITAL that you understand this point. A small illustrative sketch of the quantitative case follows.
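Purely as an illustrative sketch of the quantitative style (the records, the procedure names and the pandas dependency are all my own invention, not part of the example above), one might estimate an infection rate per procedure from raw records like this:

```python
# A hypothetical sketch: given raw records of (procedure, infected), estimate an
# infection rate per procedure. The data below are invented purely for illustration.
import pandas as pd

records = pd.DataFrame({
    "procedure": ["knee", "knee", "knee", "knee", "hip", "hip"],
    "infected":  [True,   False,  False,  False,  False, True],
})

infection_rates = records.groupby("procedure")["infected"].mean()
print(infection_rates)   # on this toy data: hip 0.50, knee 0.25
```

The qualitative step described above runs in the opposite direction: it starts from a figure like that 25% and works towards a description (the check-list) of what to do about it.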

RESEARCH TYPE – broadly speaking there are two types: the first is interventionist, where you deliberately make a change in a situation and then study its consequences, and the second is observational, where you simply record what is currently going on. I will use the same example as above.

Observational - as in the example above, I do nothing except record which surgical procedure was used and the post-op infection rates. I don't interfere in any way and make no changes.

Interventionist - continuing the example, let us suppose I develop the necessary check-list to be used by the surgical teams - that is my intervention, the change I make. Now I start recording the data exactly as before about procedure and post-op infection rates. At the end of these two studies I CAN therefore decide whether my intervention made any difference at all.

You can also see in this example how studies can, and in some cases (as in this one) must, be linked.

1. Start by getting raw data on post-op infection rates.
2. Next a study to develop a check-list
3. Finally, get the check-list into use, collect infection data again, and see whether there has been a significant reduction (or of course an increase, though we hope not). Here we might, indeed should, employ a statistical test of significance to be sure the change has made a real difference, as sketched below.
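As a rough illustration of step 3, here is a minimal sketch of one common test of significance on a 2x2 table of counts (a chi-squared test); the before/after counts are invented and the scipy dependency is my assumption, not something specified above.

```python
# A minimal sketch of step 3: a significance test comparing post-op infection
# rates before and after the check-list. The counts are invented for illustration.
from scipy.stats import chi2_contingency

# rows: before check-list, after check-list; columns: infected, not infected
observed = [
    [25, 75],   # before: 25 infections in 100 operations (invented)
    [12, 88],   # after:  12 infections in 100 operations (invented)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally < 0.05) suggests the change in infection
# rate is unlikely to be due to chance alone.
```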

(If you are interested in check-lists in medical practice, see the book "The Checklist Manifesto: How to Get Things Right" by Atul Gawande (a surgeon), ISBN 9781846683138.)
 
For your very last point, wouldn't it be prudent to have a control group in which you present a "take note of high rate of post operation knee infection" notice to the surgeons, to see if the checklist itself is really effective over just increased attention to the possibility of infection?
 
For your very last point, wouldn't it be prudent to have a control group in which you present a "take note of high rate of post operation knee infection" notice to the surgeons, to see if the checklist itself is really effective over just increased attention to the possibility of infection?

I will discuss setting up a trial later. You are right, but having a control group is in some ways ethically difficult in this case. Suppose we know that the infection rate is 25%; then we would hardly let the control group be those who don't use the check-list and the trial group be those that do, just to check our figures.

So in this case we could take a random sample from clinical records where knee surgery occurred in the recent past and extract from it infection histories as well as the infection control practices in use. That then is our control group, and the trial group is the one that uses the check-list.

In terms of increased attention, one might be 'worried' that it is reducing the post-operative infection rates, and so if one is not very careful one might end up saying the check-list is more useful than it truly is and exaggerate one's findings. So in a trial one does as much as one can to make sure that IF there are any changes, you can be fairly certain that they arose (remember they can be negative or positive) solely because of the intervention (the check-list). Notice in this case I suggested that we look at infection control practices. I do this NOT because I necessarily want to change them, but because they will allow me to moderate or qualify my results.
 
I have a psychology research background, not medical, so excuse my ignorance - is it not allowable in medical research to use a dummy control when it hasn't already been demonstrated that what you're investigating is efficacious?
 
I have a psychology research background, not medical, so excuse my ignorance - is it not allowable in medical research to use a dummy control when it hasn't already been demonstrated that what you're investigating is efficacious?

This is not a term I am familiar with, but there is such a thing as a 'double-dummy'. For example, you want to trial a pill with some patients and an injection with others; to make sure the interventions look the same, you give both groups both forms, with the unused form in each case being a placebo.

However, your question is slightly odd (I mean no offence!) because the reason we do a trial is to establish efficacy, ipso facto, because it has not already been demonstrated. In the knee surgery case I mentioned earlier, one could say that there was no check-list previously, so in a way we have nothing to compare it with; that is why I suggested we use clinical records, so no 'real' control group had to be set up and I created one - maybe one could regard that as a dummy?
 
Dummy I made up - can't think of the phrase I'm looking for. I'll just explain, no offense taken - was hoping you'd read my mind :).

Say someone has a new therapy technique. Typically, when a study is done, there isn't always just a "new therapy" group and a control "nontherapy group." The control is actually either an already established therapy or a nontherapy (just talking to a client without therapeutic techniques). This is because you want to establish that it's at least as effective as present therapy (in the first case) or isn't just the client-therapist relationship that is improving the client - it's actually the therapy (in the second case).

In the case with the checklist, it seems like you would want to establish that there isn't a mediator of the surgeon just acknowledging a high rate of infection and automatically doing his own thing to correct it. So in comparing the checklist to a "hey doc, remember that infection is prevalent following this surgery," you can establish that the reminders you're giving in the checklist are actually effective, and, if not, be able to try another set of reminders.
 
Say someone has a new therapy technique. Typically, when a study is done, there isn't always just a "new therapy" group and a control "nontherapy group." The control is actually either an already established therapy or a nontherapy (just talking to a client without therapeutic techniques). This is because you want to establish that it's at least as effective as present therapy (in the first case) or isn't just the client-therapist relationship that is improving the client - it's actually the therapy (in the second case).

In the case with the checklist, it seems like you would want to establish that there isn't a mediator of the surgeon just acknowledging a high rate of infection and automatically doing his own thing to correct it. So in comparing the checklist to a "hey doc, remember that infection is prevalent following this surgery," you can establish that the reminders you're giving in the checklist are actually effective, and, if not, be able to try another set of reminders.

You may be thinking of a placebo control group. Normally, as you say, one compares one intervention with another because it is usually considered not entirely ethical to give treatment to one group and none (the placebo) to the other, especially if there is some evidence of the treatment's efficacy.

Now in the knee case it's hard to see how one could administer a placebo (it need not be a pill), which is why I suggested old patient records. One could, I suppose, use the check-list with one surgeon and not with another (or at two different hospitals), but keep in mind that at this stage we don't know for sure that it is effective; hence the trial. Interestingly, a trial was carried out not just for knee surgery but for any surgery, and one case was cited where the surgeon was very sceptical about the check-list idea until it was discovered, because of the check-list (nurses usually call out the checks, not the surgeon), that the replacement knee joint he was about to use was the wrong size - instant conversion to the idea took place.
 
At this stage I think it is wise if I direct you to some reading on EBP (Evidence-Based Practice) and RCTs (Randomised Controlled Trials). The books in this area can be expensive, especially when they are specific to a type of medicine, practice or practitioner. Almost any book can be obtained from Amazon, often second-hand. Another good site is abebooks.com, which is where I go if I know a book is long out of print or hard to get.

My three recommendations, and ones I shall often refer to in future posts, are as follows. Ben Goldacre's book (also available as an eBook) is aimed at the informed but general reader, but it is nevertheless excellent, with clear ideas and plenty of case histories to reflect on. The other two are proper textbooks intended for practitioners and students, and cover EBP, RCTs and research practices. Babu is a very compact book but a comprehensive read, whereas Rubin has much more detail with careful discussion and argument. The ISBNs, as far as I know, refer to the latest editions.

Goldacre, B. (2009), Bad Science, Harper Perennial, ISBN 9780007284870
Babu, A.N. (2008), Clinical Research Methodology and Evidence-based Medicine: The Basics, Anshan, ISBN 9781905740901
Rubin, A. (2008), Practitioner's Guide to Using Research for Evidence-Based Practice, ISBN 9780470136652

If you are looking for scholarly and referenced articles the place to go is MEDLINE, the National Library of Medicine's premier bibliographic database covering the fields of medicine, nursing, dentistry, veterinary medicine, the health care system, and the preclinical sciences. http://www.nlm.nih.gov/databases/databases_medline.html

Another way is to use reviews compiled by others, and among the most well known and highly respected are those produced by the Cochrane Collaboration http://www.cochranfoundation.com/. The Collaboration has been at work for around two decades and is an international, not-for-profit organisation of academics which produces systematic summaries of the research literature on health care, including meta-analyses.

One might note in passing that you are often told in the newspapers, or hear elsewhere, that some new cure or test has arrived, or you see some apparently well-qualified person on TV telling you that "tests have shown" that pomegranate juice will cure everything from hair loss to vertigo, or that "doctors have found that the time-honoured practice of early use of antipyretic drugs to reduce fever may be counterproductive because they interfere with the body's natural response to infection". When that happens, go to MEDLINE and see if any such trials have been done. In the same way, don't accept what I say, or what anyone else says here (because anyone can post here), without checking it out.

Goldacre states that there are at least 5,000 journals published every month and estimates there are about 15 million published academic articles. My best advice on this is to ask your various professors what they regard as the key journals; hopefully your university library will have some of them, or certainly easy access to them. There are then plenty of tools to search for the kind of things you might be interested in, and (some will cringe at this, but I have never met an academic who does not use it every day) it is often useful to start with Wikipedia - not because you are going to quote from it, as that would be frowned upon (or even punished), but because it often gives you a very good reading list and therefore a kick start.
 
Randomization
This is a difficult idea to define because, paradoxically, if you can define it precisely you have in effect systematised it. However, the idea implies unpredictability, or we might say no detectable pattern. In clinical trials we want to remove selection bias, but the difficulty is finding a way of selecting samples randomly; it is much harder than one might think, and the literature is replete with failed studies where the method of randomisation was poor, implying bias in sample selection, so that the results cannot be trusted. Goldacre states that it is known from meta-analysis that dodgy methods of randomisation can overestimate treatment effects by 41%.

The idea of a medical controlled trial goes back to 1025 AD and the brilliant work of Avicenna (Abu Ali Sina) in his famous work "The Canon of Medicine". Goldacre, though, suggests that the first recorded randomised trial was carried out in the 17th century by John Baptista van Helmont, who challenged the 'theory' of the day and proposed a trial. To avoid any charge of cheating he divided the sample into two by drawing lots: half the patients going to Helmont and half to others, and the research question was starkly simple - in Helmont's words, "we shall see how many funerals both of us shall have!" (just as well he had no ethics committee to convince).

There are many ways to randomise, but at the heart of most methods these days is a random number generator to begin the process. There is no algorithm as such for producing true randomness - if there were, the numbers would of course be predictable - but there are computer programs. Typically, what these programs actually do is sample the electrical signals in a circuit, because there are always natural, unpredictable variations. Using these electrical signals as a seed or source, we can generate a continuous sequence of random numbers.
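By way of a small illustration (a sketch of the general idea, not of the specific hardware-based programs just mentioned), Python's secrets module draws on the operating system's entropy pool, which is itself fed by unpredictable sources, so the numbers do not come from a predictable, fixed seed.

```python
# A minimal sketch: random integers drawn from the operating system's entropy
# pool rather than from a predictable, seeded algorithm.
import secrets

draws = [secrets.randbelow(100) + 1 for _ in range(5)]   # five integers from 1 to 100
print(draws)
```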

Taking a Random Sample
In this section I describe a simple method of defining a random sample. Let us suppose you want to run an observational study on patients after cataract surgery. Scarring is a common post-operative side effect of surgery to remove cataracts, and it is known that 30% of patients need laser treatment one or two years afterwards because a membrane forms, a kind of scar tissue around the implant, that can gradually obscure vision. However, let us further suppose that a new lens implant material becomes available and we wish to see whether scarring is reduced. Now, to do this trial you can assume the existing return rate is 30%, but ONLY do that if it has been verified in clinical studies. Alternatively, just choose patients over a longer period; some will then have the old implant material and some the new, and you can compare the two data sets. Now 30% is a significant figure when you think of patient distress, and although the laser treatment is simple it will still require two hospital visits, one to carry out the procedure and one to follow up; in any case every intervention should be avoided if possible.

Define the Population - start by defining the population; usually this is done by setting criteria and then estimating how many people or things might meet them. For example, suppose I want to sample all patients after cataract surgery who went on to develop scar tissue that obscures vision, and I estimate this to be 200 - although in this case I might be able to get patient records to tell me exactly how many there are with the old implant material and how many with the new.

Define a Sample Frame - this just means a list of some kind from which you will actually choose your patients, as usually the population is too big to study as a whole, and in general a well-selected sample will give you as much information anyway. In this case let us suppose I can get hold of patient lists; I number each patient in the trial group (those with the new implant) from 1 to 100 (in this case) and do the same with the control group (those with the old implant).
In fact, as long as you number these lists systematically you can start and finish anywhere - so you could number the frame from 1 to 100, 200 to 299, or 87 to 186, etc. So I end up with two lists similar to the following.

Trial Group - 001 John Ashman, 002 Paul Brigham, .....,095 Janet Brown,...., 100 Anthony Zaccari
Control Group - 201 Lydia Taylor, etc.​

For the rest of the example I will use just one of these lists, but the principles are the same when you include both groups.

Decide or Calculate a Sample Size - there are many ways to do this, but just for example purposes let us say that I want to choose 20 patients randomly for my research study out of the 100 I have available in my sample frame. (There are many ways of calculating a sample size, and any good statistical package within your hospital information system will have a process for doing that. There are even iPhone apps, such as Biostats Calculator at about $10, that will do all this for you as well as dealing with all kinds of stats and tests. A rough formula is sketched below.)
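If you have no such package to hand, here is a rough sketch of one standard formula for the sample size needed to compare two proportions; the 30% and 15% figures below are invented for illustration (say, expected scarring rates with the old and new implant materials), and scipy is an assumed dependency.

```python
# A rough sketch of a standard sample-size formula for comparing two proportions,
# not a substitute for proper statistical advice or the packages mentioned above.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# e.g. 30% scarring with the old implant vs a hoped-for 15% with the new one
print(round(n_per_group(0.30, 0.15)))   # roughly 118 patients per group
```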

Generate Random Numbers - now I must generate 20 different random numbers between 1 and 100 (or within whatever systematic numbering you used for the frame). If the generator gives you the same number more than once, just discard the repeats. Here I use the iPhone app AppBox Pro (a toolbox of apps): with the tool called Random, I tell it my range (1 to 100), press a click icon (or shake the phone), and it gives me the numbers one at a time.

68, 33, 61, 89, 17, 24, 73, 80, 01, 50, 85, 92, 60, 95, 37, 72, 79, 21, 28, 11; writing them out in order for convenience:
01, 11, 17, 21, 24, 28, 33, 37, 50, 60, 61, 68, 72, 73, 79, 80, 85, 89, 92, 95

DO NOT be tempted to tamper with this list and say to yourself things like "72 and 73 cannot be right". You MUST trust that the iPhone app has done its job and given you a random list. I warn you, more problems than you can imagine occur when people try to second-guess these sophisticated random number generators - just trust them.

Select the Sample - now go through your sample frame, selecting the patients that correspond to these random numbers. Again, BE WARNED: do NOT try to second-guess, no matter how tempted you are. Thus we end up with our sample of 20 patients.

001 John Ashman, 011 Paul Aldridge, 017 Victor Litchmore, 021 Gaetan Madhvani,....095 Janet Brown​

These are now your selected 20 sample points. If you wish, and it might be wise, you can select a few more in case some refuse to take part. A short sketch of the whole selection step follows.
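Purely as a sketch of the selection procedure just described (the patient names are placeholders of my own, and random.SystemRandom stands in for the phone app), the whole step can be reproduced in a few lines.

```python
# A minimal sketch of the sampling step above: draw 20 distinct random numbers
# from a frame numbered 1 to 100 and pull out those patients. Names are placeholders.
import random

sample_frame = {i: f"Patient {i:03d}" for i in range(1, 101)}   # the numbered frame

rng = random.SystemRandom()                                     # OS-backed randomness
selected_numbers = sorted(rng.sample(sorted(sample_frame), 20))
selected_patients = [sample_frame[n] for n in selected_numbers]

print(selected_numbers)
print(selected_patients[:3], "...")
```

As with the phone app, the point is to take whatever the generator gives you and not to second-guess it.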
 
In this post I would like to make you aware of research pitfalls, because they will help you look critically at what people say and allow you to exercise true academic scepticism. My advice would be to read through these ideas and then look through this or other discussion boards where people introduce data and construct arguments, and see if you can spot these poor practices - this exercise will be well worth doing. The ideas are common to all kinds of research and are unquestionably weaknesses of huge significance. I use the terms typical in the scientific community, though they are of course not universal; if you want to consider these ideas (though not all of them are mentioned) with extended examples, then read 'Bad Science' by Ben Goldacre.

Cherry Picking/Cooking - this occurs when you are selective, or very selective, about the outcomes or the basic data, so that you only choose examples that support your particular case or stance. Roughly speaking, if you fiddle the basic data we might call that 'cooking', and if you choose only favourable outcomes from the processed data, that is cherry picking. Engaging in either of these practices means you took a short cut, and it amounts to dishonesty. A common example of this kind of flawed research is ignoring data from people who drop out of trials: because it is much more likely they have done badly or had side effects, they will make the drug look bad - so they are ignored, no attempt is made to chase them up, and they are not included in the final analysis.

Torturing the Data - "torture the data and it will confess to anything", as they say at Guantanamo Bay. Once you fix in your brain that a particular thing is true, you start seeing it everywhere in your data; you want it to be true. Ben Goldacre, in his book 'Bad Science', recounts a story about Richard Feynman (undoubtedly one of the finest, though somewhat maverick, brains of his day), who started a lecture with a very salutary story. If you cannot understand the point he is making, you really do need to do a lot of reading and re-reading of research ideas.

You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won't believe what happened. I saw a car with the licence plate ARW 357. Can you imagine? Of all the millions of licence plates in the state, what was the chance that I would see that particular one tonight? Amazing....

Surrogate Outcomes - this means inferring one research outcome from another. Of itself this might be a useful idea, but if it is stretched then we may end up with anything: Ford cars have good engines, therefore all cars must have good engines. In the medical line we might say a drug improves blood test results so it must protect against heart attacks, or that lab studies on mice show salmonella-infused cancer cells stimulate an immune response that kills the cancer, so this will happen in humans also.

Zero Alternatives – this is similar to 'torturing the data', but it occurs when you decide what you want to conclude and only look for data that might support it; in effect you do not consider alternative explanations of the data, assuming that you must be right. This is a very subtle form of malpractice because it can look like Popperian falsification, a very proper scientific method, but without the necessary honesty about what the data as a whole is telling you.

Hiding Methodology – part of the presentation of any set of data or results is to describe the methodology used: the research method, the research plan used to extract and process the data. Without this information it is NOT possible to have confidence in the outcome. Be honest, would you trust a research study's outcome if the study owners refused to tell you how they got their results? Once we know the methods, we can check for flaws or weaknesses - for example, in medical research there are the so-called Jadad scores. According to Goldacre, studies which don't report their methods fully tend to overstate the benefits of the treatments by around 25%, and that is practically fraudulent as well as possibly dangerous.

Authority - are you taken in because the people who make a claim are 'experts', well qualified, so it must be right? Of course we want to check credentials, but if we rely on those alone we will be making a big mistake. Sadly, the literature in almost every discipline is littered with well-qualified charlatans. By all means check qualifications, but don't fall into the trap of thinking that is enough for a result to be valid.

Journals and Review Sites – often in student work one cannot find a reference to a single reputable journal that has published a definitive study in the area under investigation. This is not a difficult task; most university libraries will have journal collections, and there are review sites such as the Cochrane Collaboration in the medical sciences.

Interpretation - in research it is often said that getting the data is easy, processing it is hard, and interpreting it is where we give up, lie down in a dark room, and hope the problem will go away. Finding meaning is always going to be hard work because:
Clarity - results may not be all that clear; unfortunately, results may also be far too clear, which should always make you think you have made a mistake (some things are just too good to be true).

Patterns - if you look at any set of data long enough you will find patterns; sadly, it is all too easy to be biased, lazy or tendentious and look for what you want to see, or even insert what you want to see.

Knowledge - finding meaning implies you need to be really knowledgeable, expert in your area, and absolutely honest.
Statistics - be very wary of statistics and always get an expert to help you decide what statistics you need and how to make sense of them. Sadly, this is often not done, and serious blunders can and will be made if you don't really understand what you are seeing in the data, or if you have only a shallow understanding of the various statistical measures your SPSS package churns out. Be honest: most researchers are NOT statisticians, so don't be afraid to ask for help at the start and end of a research project. Indeed, one of the biggest blunders you can make is not getting good statistical advice at the start - let's face it, once the data is collected it's too late to change your mind, and you may end up having to abandon the whole project because you belatedly realise the data is not suitable. Finally, be aware that you can process ANY set of data and derive a result, but if it subsequently turns out there were fundamental errors in your choice of data then the project fails, and you may well be humiliated and discredited because of it - or worse, your faulty results might, for example, show a drug to be safe when it is not, with perhaps very serious human consequences.

Suppression - it is very tempting but also dishonest to suppress negative findings, or findings you do not like for whatever reason; this can have serious implications, say in medical studies, and when you are discovered to have done such a thing your academic career is over. It may help to keep these two aphorisms in mind, because they both point to the very worst in research: "If facts do not conform to theory, they must be disposed of" AND "Researchers should always state the opinion on which their facts are based."

Over- or Inappropriate Generalisation - this is just another way of making sure you understand the notion of not arguing from the particular to the universal. That is, you get one result and conclude it now applies everywhere; sadly, this usually occurs when you are desperate to prove your point at any cost. A good example appeared on a discussion board I saw recently, where one member argued that because one historical event was true and had supporting evidence, every other one had to be true as well. To give a more mundane example, this faulty logic would lead you to say after research: Ford cars have good brakes, therefore Honda cars must also have good brakes - this might be true, but it does not logically follow.

Localization – this occurs when you fail or refuse to see how your logic should be generalised; to put it another way, saying in effect that the logic only applies where you say it does and nowhere else. For example, suppose I argue that two accounts of the same medical event differ, therefore they are fabricated. This argument cannot apply only to these two accounts, so the generalised form would be that when any two event descriptions differ they must of necessity be fabricated. As you can see, the generalisation is obviously not justified, because it fails to take into account that the accounts may later be reconciled, or that they may simply view the event from different perspectives.
 
Trial and Placebo
The idea of a controlled trial, meaning we run it according to a strict and unvarying protocol, is not new, and you might be interested to know that the first known trial was about diets and dietary effects. It is recorded in the Old Testament, where Daniel (Daniel 1:1-16) suggested the first ever clinical trial. It was simple: his men would eat only vegetables and drink only water, while the other set of men would eat meat and drink wine. The test was then to see which group looked better and healthier after 10 days.

RCT (Randomised Controlled Trial) Structure - a trial requires two groups, and assignment to each group is by a randomisation process; one group is called the control and the other the trial group. In this sense the control is what we compare against, the baseline. Then we usually have an intervention, and in most trials we compare two treatments; we are interested to know whether there are any differences and whether those differences on average are significant. For example, we might want to compare the effects of using cimetidine (the control intervention), which has been in use since 1975, with a newer drug called ranitidine (the trial intervention) and see how they perform in terms of eradicating ulcers - and, as you know, it's easy to be certain because treatment success can be unambiguously recorded by having a look down there with a gastroscope.
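As a hypothetical sketch of the allocation step in such a structure (the patient IDs are invented, and this shows only the simplest possible scheme rather than a full trial randomisation protocol), random assignment to the two arms might look like this:

```python
# A hypothetical sketch of simple random allocation to two arms of equal size.
# Patient IDs are invented; real trials use formal randomisation protocols.
import random

patients = [f"P{i:03d}" for i in range(1, 21)]

rng = random.SystemRandom()
shuffled = patients[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
allocation = {
    "control (cimetidine)": shuffled[:half],
    "trial (ranitidine)": shuffled[half:],
}
for arm, members in allocation.items():
    print(arm, members)
```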

So at the end of the trial we might consider treatment success rates and find that, for instance, in the control group it is 50% and in the trial group it is 80%, or vice versa, or there is no difference at all - although it is far more likely the differences will be on a much smaller scale, and then the quality of your trial may make all the difference in giving you a clear result. You might also consider side effects, and often of great interest are results that are outliers; for example, one or two patients in the trial or control group may get significantly worse or better than the majority, and if this happens you would, for obvious reasons, go back later and see if it can be explained. Outliers might arise because the trial protocol was not followed with some patients, or they might be indicative of something that makes the intervention more or less useful.

Caution - if during the trial it is obvious that one intervention is significantly better or worse, and this occasionally happens, you might decide to abandon the trial because it now serves no purpose and you may be denying patients an excellent treatment or giving them a poor one. Be careful: this must never be done on a guess or a feeling, but on real, hard data. I will return to this when I speak about blinding in a later post.

Placebo Trial - occasionally one does a trial like the above with one set of patients given a placebo, but in most cases it is not ethical (see the WMA Declaration of Helsinki, the international ethics bible) to give one set of patients an intervention which may be known to work and leave the rest to possibly suffer, given that you know the placebo (usually a sugar pill) has no pharmacological effect. So most clinical trials compare two treatments, and generally they look at efficacy but also take into account relative costs as well as route of administration, because if it turns out the treatments are no different, this extra data is still useful, allowing you to select the cheapest and/or the easiest to administer.

Ideally, it is nice to know that a treatment is at least better than placebo, but there is an ethical element here that cannot be ignored. For example, if one is testing a new blood pressure drug, it is hardly ethical to administer the drug to one group of high blood pressure patients and give nothing to the other group, when you know they have the same life-threatening condition, just so you can get some nice data. In such cases one must discuss the protocol fully with your ethics committee, and also bear in mind that the control intervention may already be known to be better than placebo, so if the trial drug is better than the control it is also better than placebo. As Voltaire said: 'The art of medicine consists in amusing the patient while nature cures the disease.'

You should note here that there is also a nocebo effect, where people feel bad because they're expecting to. So as part of your trial you might give a drug but feel you must warn the patient of side effects: skin rash, breathing difficulties, vomiting, joint pain, jaundice and so on. If you are not careful you will make them think the cure is worse than the disease and so bias the results.

Ben Goldacre made a very perceptive and pointed remark about placebo trials when he said: "If anti-authoritarian rhetoric is your thing, then bear this in mind: perpetrating a placebo-control trial of an accepted treatment; whether it's an alternative therapy or any form of medicine is an inherently subversive act. You undermine false certainty, and you deprive doctors, patients and therapists of treatments which previously pleased them".

Protocol and Screening Out - one final word, and it's an important one: we must screen out as far as possible all healing/negative effects except those from the two interventions. We have already seen that selection bias is screened out by a randomisation process, but now you have to be aware of the placebo effect and screen that out also. You must understand the placebo effect is not of itself bad; doctors would like to use it but simply do not know how it functions or how it can be administered and controlled. However, it can ruin your trial, because if it is not screened out you will not be able to separate the intervention from the placebo effect, rendering the trial worthless or at best unreliable.

A placebo is not just a sugar pill, and dozens of studies have shown that almost anything can cause this effect. If we just consider tablets, then a placebo effect can occur because of the colour of the tablet, the packaging, the manner of administration, a smile or grimace from the nurse, telling the patients it is good for them, and so on. Now you cannot entirely eliminate this, but you MUST take it into account when designing the trial so that, as far as possible, the two interventions being compared are indistinguishable to patients and doctors along the delivery route - remember the delivery route goes right back to the pharmacy and possibly further.

In summary, you must work out the intervention protocol and, as much as possible, ensure that everyone follows it exactly. Please remember this might be very hard to do, because nurses and doctors differ; one might feel happy or grumpy, and these differences might end up as placebo or nocebo effects. It is essential, therefore, that care is taken in the design of the trial protocols, that everyone involved is briefed fully, and that careful monitoring is in place when the trial begins. A very good idea is to design a checklist that everyone uses, but medical staff are often resistant to this kind of thing, believing that they 'know', refusing tacitly to see that their attitude may compromise a trial.

Much of what I say here can also be found in Ben Goldacre's book "Bad Science", ISBN 9780007284870, which you can also get as an ebook. I think it is almost essential reading for all involved in medical research. Just a few days ago I was at a symposium on CAM and how well it does in terms of evidence-based medicine, and what surprised me was how few of the medical doctors present had any clear idea of what a trial is and what evidence means. You might also be interested to know that, according to Professor David Goodstein, vice provost at Caltech, most scientific fraud occurs in the biological and medical sciences, so again I advise you to read this book.
 
Hey Tranquil, I haven't posted in here yet, but thanks for taking the time to write out all this stuff. I love statistics and experimental design. Unfortunately, I've been pretty busy lately but hopefully, I'll be contributing to this thread within the next week or so. Just wanted to let you know that some of us do enjoy reading this stuff so you don't feel like you and organic are the only ones getting something out of this thread. :)
 
After some thought, it seems better to place this thread under Research Forums and the sub-forum Student Research and Publishing, as I think that might better bring similar things together.

To start off there I have posted a note on check-lists which in case you are unaware are of huge importance in medical and research practice. I will gradually transfer all that is written here into that thread.
 
In this note I will try to illustrate how the terms theory and hypothesis differ. To do this I shall use a recent article from New Scientist: "Low-power laser may keep blindness at bay", 16 January 2013 by Michael Slezak, Issue 2900.

Put simply, a hypothesis is a statement about something that may be true or false. For example, I could hypothesise that 'smoking increases the chance of getting lung disease.' My research task would then be to test this statement by looking for evidence in one way or another. Now, you may arrive at your hypothesis in many ways, commonly through intuition, guessing or partial evidence. In medical science we might get an idea for a test to verify the hypothesis, but obviously, in medical matters we must at least be careful that what we propose won't do any harm.
Typically, with a hypothesis you don't understand why it is true, the mechanism. That is, it is one thing to suggest smoking causes lung disease and quite another to explain the causal mechanism.

On the other hand, with a theory we do have a suggested causal mechanism, an explanation, and such theories predict certain things which we can look for to verify the theory. For example, with smoking we might predict that the causal mechanism is narrowing of the arteries and therefore look for that evidence in patients. Now, an example.

IMAGINE the horror of being told you are losing your sight and that nothing can be done to prevent it. This is the reality for millions of people with age-related macular degeneration (AMD). But a novel laser treatment for AMD gives hope that this leading cause of blindness in the West could one day be preventable. As we will see later, unpublished results from a pilot trial have left researchers scratching their heads as to exactly how it works, indeed, the findings also challenge ideas about the basis of the disease.

AMD corrodes the macula, a part of the retina with the highest density of photoreceptors. The disease leaves people with a gaping hole in the middle of their vision, making reading and recognising faces difficult or impossible. There is no treatment for the most common form of the disease (dry AMD), but drugs that slow its progression are available for the rarer, more aggressive form (wet AMD).

In most people, the condition starts with unusually large deposits of extra-cellular debris called "drusen" littering the retina. Drusen, which consist of proteins and lipids, are supposed to be cleared away by the retinal pigment epithelium (RPE) cells. But as those cells age they become less effective at doing that.

The exact cause of what happens next is not well understood. Either because of the extra drusen, or as a result of whatever is damaging the RPE cells in the first place, the RPE cells become starved of oxygen. As they die off, they stop providing energy to the photoreceptors, causing them to die too. This is a serious problem as the density of photoreceptors is highest in the macula, so any loss noticeably affects vision.

As early as the 1970s, there was some indication that laser treatment cleared away the drusen, but this did not come with an improvement in sight. In some cases, trials were even halted for fear they were making things worse. This is unsurprising as the lasers used were high energy and made visible burns on the retina. But today's more sophisticated, low-energy lasers offer more subtle options.

In 2010, ophthalmologist Robyn Guymer at the University of Melbourne's Centre for Eye Research Australia conducted a pilot trial with 50 volunteers who were in the very early stages of the disease, with some build-up of drusen. They each had treatment in one eye with a specially designed laser.

After treatment, the majority of the participants saw benefits - a reduction in the amount of drusen, an improvement in their sight, or both. In lab tests, some participants were able to notice small differences in the intensity of light indicating that the retina had regained some of its function. "The sensitivity of the retina improved in the spots that were most at risk of running into trouble," says Guymer. "There's been no other intervention where you can improve the function of a person's retina."

So why should lasering the already embattled RPE improve things? One THEORY is that the cells are so tightly bound that they never divide and regenerate, eventually becoming less effective at removing drusen. If the laser shot through that layer, killing some of the cells and breaking up the tight bonds, it may have allowed new RPE cells to be created. The laser should be able to do this because, rather than a uniform beam, it is made up of thousands of little beam spikes which turn its target into a pin-cushion. It kills a smattering of individual cells but leaves enough healthy cells in between to kick-start the regeneration of the RPE.

This cannot be the whole story, though. When Guymer conducted the pilot study she treated only one of each volunteer's eyes in order to use the other as a control. To her surprise, among those participants who saw a reduction in the drusen, most of them experienced the effect in both eyes. "It's a little hard to explain how the other eye is affected by the [rejuvenation] mechanism," says Guymer. It seems something else is triggering a response in both eyes.

Guymer reckons the immune system might be responsible. To protect the eye from potentially damaging inflammation caused by an immune response, it usually sits outside the immune system's radar. Unfortunately, this means that the drusen are also "in an immune privileged position", says Guymer, hidden by a tight layer of RPE cells. She thinks that when the laser kills some of the RPE cells, it effectively alerts the immune system to the presence of the drusen, triggering a double-whammy clean-up of the debris by both the immune system and the newly rejuvenated RPE cells. "That's not proven," Guymer stresses, "that's just the working HYPOTHESIS."

Philip Rosenfeld at the Bascom Palmer Eye Institute in Miami, Florida, is excited by the results and says the immune explanation is plausible. "You can come up with a lot of explanations but the most likely HYPOTHESIS is that something is being stimulated in the immune system that's been transferred to the other eye," he says.

Although still a long way off, Guymer sees a future where the new laser is used as a preventative measure in people at high risk of AMD, in a similar fashion to the way heart attacks are prevented by treating blood pressure. "Ultimately, if your parents had AMD, you'll have a genetic test and if you've got the gene you'll have this laser and you won't get the disease. That's where we would like to head. One lasering, perhaps once a year, for those who are genetically at risk," she says.

If Guymer had her way, this approach would be just the start. She imagines a time when people at genetic risk simply get vaccinated so their body recognises the drusen and clears it away. "We just need to trigger the immune system to do a better job cleaning up that debris."
 