Trials, Evidence-Based Practice and Research

I created a similar thread in the interdisciplinary forum but was wondering if it is more appropriate here. If there is any interest I will copy that thread here; so far it has only a few postings, but I will add a new entry here so you can gauge the flavour of the thread. This post is based on Atul Gawande's book "The Checklist Manifesto" (ISBN 978-1846683138). Gawande is a surgeon and Associate Professor in the Department of Health Policy and Management, Harvard School of Public Health. The idea of a checklist is simple, but it is also an area worthy of a lot more research in medical settings.

Checklists – How to get things right?
Teaching and learning are both hard things to do; they require effort and perseverance, in medicine often over a very long time scale, and when they don't work we often (sadly) look for excuses rather than remedies. Medicine is of course complex, and for it to work lots of things and people have to fit and work together, so health care systems can just as easily go wrong as right. But go to almost any hospital in the world and it will be plagued by failures - missed subtleties, overlooked knowledge, and outright errors. Sadly, we often imagine that little can be done beyond working harder and harder to catch the problems and clean up after them – we figuratively shrug and say, in effect, that medicine is a machine so complex no one can make it work perfectly. So we opt to ‘try harder’ or to dismiss failures as the failings of weak students, poor teaching, or lack of resources, instead of choosing to accept our fallibility.

Code of Conduct and Discipline
Is there a way forward? Consider first a definition of professionalism: a code of conduct, where a profession spells out its ideals and duties. Codes differ between professions, but they all have at least four common elements.

Expectation of selflessness - we who accept responsibility for others – whether we are doctors, lawyers, teachers, public authorities, soldiers, or pilots - will place the needs and concerns of those who depend on us above our own.

Expectation of skill - we will aim for excellence in our knowledge, expertise and practice.

Expectation of trustworthiness - we will be responsible in our personal behaviour toward our charges.

Expectation of discipline - discipline in following prudent procedure/practice and, most importantly, in functioning with others.
This last concept of discipline is almost entirely outside the lexicon of most professions, including medicine. In medicine, just as in education, we hold up "autonomy" as a professional lodestar, a principle that stands in direct opposition to discipline. But in a medical world in which success now requires large enterprises, teams of educators and doctors, huge investment in technologies, and knowledge that far outstrips any one person's abilities, individual autonomy hardly seems the ideal we should aim for. It has the ring more of protectionism than of excellence. Often the closest our professional codes come to articulating the goal is an occasional plea for "collegiality." What is needed, however, isn't just that people working together be nice to each other. It is discipline. Discipline is hard – harder than trustworthiness and skill, and perhaps even than selflessness. We are by nature flawed and inconstant creatures. We are not built for discipline; we are built for novelty, excitement and instant gratification, not for careful attention to detail. Discipline is something we have to work at.

Checklists
If we consider an industry where failure can have catastrophic consequences, we might understand why aviation has required institutions to make discipline a norm. Pre-flight, in-flight and emergency checklists began in the 1930s, but the power of the discovery gave birth to entire organizations that independently determine the underlying causes of failure and recommend how to remedy them. And we have national regulations to ensure that those recommendations are incorporated into usable checklists and reliably adopted in ways that actually reduce harm. To be sure, checklists must not become ossified mandates that hinder rather than help. Even the simplest requires frequent revisitation and ongoing refinement. Aircraft manufacturers, for instance, put a publication date on all their checklists, and there is a reason why – they are expected to change with time. In the end, a checklist is only an aid. If it doesn't aid, it's not right. But if it does, we must be ready to embrace the possibility.

Without question, technology can increase our capabilities. But there is much that technology cannot do: deal with the unpredictable, manage uncertainty, or deal with a distraught patient. In many ways, technology has complicated these matters; it has added yet another element of complexity to the systems we depend on and given us entirely new kinds of failure to contend with. One essential characteristic of modern life is that we all depend on systems – on assemblages of people or technologies or both – and among our most profound difficulties is making them work. In medicine, for instance, if you want patients to receive the best care possible, not only must you do a good job, but a whole collection of diverse components has to somehow mesh together effectively – and we know that having great components is not enough. Where in daily practice do we have someone swooping in to study failures, or mapping out checklists, or an agency tracking the month-to-month results? We don't study routine failures – we don't look for the patterns in our recurrent mistakes or devise and refine potential solutions for them.

There is no other choice. When we look closely we recognize the same balls are being dropped over and over, even by those of great ability and determination. We know the patterns. We see the costs. It is time to try something else. Try a checklist.

Types of Checklist
A checklist is a simple device but it needs to be used honestly and collectively. No one is exempt because no one is infallible. The trouble in medicine is that we kid ourselves that we have everything in place, but more often than not what we have is voluminous and treated with contempt, and staff hide behind a view of ‘academic or professional judgement’ or some other description that allows us to stand alone as the judge and usually do nothing. An answer is a checklist: a short and ever-evolving set of things we always do and never skip, and we do it as a team. Checklists are essentially used:

Before we start – because once we start there may be no road back if a mistake is made or something essential is missing. If one removes a patient's knee joint and then finds the implant is the wrong size, or it's damaged, or part of it is missing, there is no way back.

When we complete – because we have to ensure that nothing has been overlooked or missed as we hand on some artefact for use. You examine a patient in the emergency department with a minor burn and, to be on the safe side, you prescribe penicillin, but because you are rushed you forget to ask – or assume someone else asked – whether they were allergic.

When something goes wrong – there is no point in apportioning blame at this point and the whole focus is on correction. The event may need immediate action or perhaps it can wait, but in any event we must know what steps to take. You are midway through a surgical procedure you have done a hundred times before when, for some inexplicable reason, you cut an artery. I expect you will know what to do, but I wonder: had you prepared for just such an eventuality by having blood on hand in sufficient quantities?

Checklists are not a substitute for professionalism, knowledge and skill, and the expectation is that they are always used by those who are themselves expert and knowledgeable. If this point is not understood then one ends up writing not a checklist but a textbook – we have to assume that those who use a particular checklist are competent. Checklists are typically used at ‘pause points’, meaning you deliberately slow down or stop to work through them. There are two kinds of checklist:

Do-confirm – that is, we ask that someone confirms that something has been done. In most medical situations we might use do-confirm, for example at the beginning of a surgical procedure to confirm that everything is ready. One might note here that it is not usually the surgeon who marks off the check points.

Read-do – the expectation here is that an unexpected event occurs, so one selects and reads the relevant checklist and does what it says. For example, during surgery a patient has a sudden drop in blood pressure.

These two things mean that everyone knows what to do in a given situation and everyone does the same thing. There may be hundreds of checklists produced in your checklist factory but that does not matter; all that matters is that you can find the right checklist when you need it. The whole point is that you use the checklist when it is needed and you do it in concert with others. It is easy to see that this can and should become routine, automatic for everyone, and it will save us from many mistakes and hence considerable amounts of money and anguish, to say nothing of the effects on patients of a mistake.

As a rough rule of thumb, checklists need to be concise – on average not more than 10 questions (you must resist ANY temptation to make them longer) – and those questions must be constantly under review. It is best if these lists are used collaboratively; though this may take extra time, in the long run it makes for better working relationships and fewer problems – essentially, if someone is doing it with you it is practically impossible to shirk your responsibilities.
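As a toy illustration only – the items, the hard 10-question cap and the class itself are my own inventions for the sketch, not anything from Gawande's book – the two kinds of checklist and the brevity rule might be modelled in Python like this:

```python
# A toy model of the two checklist kinds described above.
from dataclasses import dataclass

@dataclass
class Checklist:
    name: str
    kind: str          # "do-confirm" or "read-do"
    items: list[str]

    def __post_init__(self):
        assert self.kind in ("do-confirm", "read-do")
        assert len(self.items) <= 10, "keep checklists concise"

    def run(self) -> bool:
        # Do-confirm: the team pauses and confirms work already done.
        # Read-do: each item is read aloud and performed in order.
        for item in self.items:
            if input(f"[{self.name}/{self.kind}] {item} (y/n): ").strip().lower() != "y":
                print(f"STOP: '{item}' not confirmed – resolve before continuing.")
                return False
        return True

pre_op = Checklist("Pre-op", "do-confirm",
                   ["Patient identity confirmed?", "Implant size and condition checked?",
                    "Allergies asked about?", "Sufficient blood on hand?"])
```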
 
1. Medical research is a difficult area because on the one hand it offers so much value and on the other, when things go wrong, so much damage. One can point at the huge advances made and the practical eradication of many diseases; however, things have gone wrong, and one only has to consider thalidomide, or the use of cardiac anti-arrhythmic drugs, which are known to have cost more American lives than the Vietnam war. (See "When Doctors Kill: Why and How" by Joshua Perper and Stephen Cina.)

2. It is also sadly true that in scientific research there have been many cases of fraud or misconduct. In fact Professor David Goodstein of the California Institute of Technology shows in a recent book that most scientific fraud or misconduct cases involve biological science, with medical doctors disproportionately represented. (See "On Fact and Fraud" by David Goodstein, Princeton University Press.)

3. There are many reasons for what I have said in item 2, but for the purposes of this thread I will point out three and elaborate on them in subsequent posts.

Scientific Method - over several centuries, experimental methods and principles have been developed; these must be thoroughly learned, and it takes a long time to learn and practise them. Indeed, it is only when you do real work that the main points begin to sink in, and getting to that point with humility coupled to knowledge is central to developing your research potential. It might help to think about the scientific method, which generally starts with a theory, and because it is a theory it can in principle be falsified. The theory tells us what data we need, so we can set up an experiment to see if the theory has any validity. If the theory is validated then it can be predictive anywhere and everywhere – Archimedes' principle works for everyone, anywhere, all the time.

It is generally accepted that the RCT (randomised controlled trial) is a kind of gold standard, but is it scientific? Just consider: in a typical trial are you testing a theory? Usually not; we just test one intervention against another, so whatever result you get cannot be generalised – knowing that drug A is better than drug B for a particular condition is all you will know, and obviously it has no bearing on drug C for the same condition. This is perhaps all we can do in most cases, but you might like to consider that often in a trial we don't actually know how the drug or the disease works – we have no theory.

Ethics - in all science there is an ethical dimension and it has to be thought through with considerable care. In medical research it is paramount, for obvious reasons. Indeed, a large number of both fraud and misconduct cases can be traced to poor ethical standards. Ethics is about right and wrong and manifests itself as questions about intention and outcome. If you like, you have an intention to produce an outcome of some kind and so need to ask two questions: are my methods acceptable, and is the outcome right or wrong? In the modern world it is simply not possible to be dogmatic and say this is right and that is wrong, although we see it every day in religious writings. This is not to say we do not follow a particular religious persuasion, but that is a personal choice and not everyone holds to it. Instead, what we must do is construct arguments about what is ethically acceptable based on some empirical but agreed principles; nothing else will really do. Mary Warnock summed up these principles as follows (see Mary Warnock, "An Intelligent Person's Guide to Ethics", ISBN 978-0715635308):

Sympathy – when you decide on an action you must think about (be sympathetic to) those who might be involved, directly or indirectly.

Altruism (unselfishness) – being ethical is not about satisfying your own position or your company's, and may mean sacrifice for you or others.

Imagination - this might sound like an odd idea, but unless you use your imagination you will simply be unable to feel what it is like for anyone else or see what consequences there are. In a very real way, imagination underpins the whole of ethics.

These ideas are general – one might even say universal – and as such might usefully be worked into research statements or research proposals, such as:

The requirement is for excellence with a distinct character related to <a given organisation or setting>.

The standards and values relating to contractual, academic, financial and ethical considerations must be applied impartially everywhere and be of such probity that they can be defended anywhere.

In general we start with accepted ethical principles, as I have outlined above, and then work out a set of ethical guidelines specific to an industry; so one can easily find, in the literature, books on computer ethics, ethics in law, ethics in medicine and so on.

Statistics - when one begins statistics it can seem quite easy, but this is a false assumption, and unless you really know what you are doing you can make horrendous blunders. These days we have SPSS and Excel, so given a set of data one can generate a whole raft of statistics with zero effort. However, like any science, all statistics are hedged about with conditions and limits, so interpretation of what you have been given is likely to be very hard EVEN if you are expert. Statistics is ultimately based on probabilities, and everyone has difficulty in that area.

Sadly, the literature is littered with cases of scientific blunders made because researchers did not understand what they were doing. For example, there are many cases where researchers confused correlation and regression, were selective in which data points they used, collected the wrong data, and so on. So what we have to say here is NOT simple, and if you are to get any benefit you will have to work hard. To give a simple example, suppose my risk of stroke is assessed as 12% and my doctor tells me that if I take a statin it will reduce my risk by 16% (we will not complicate it by adding in side effects). Almost no one outside a numerate discipline can explain why your new level of risk if you take the statin is 10%: the 16% is a relative reduction, so the new risk is 12% × (1 − 0.16) ≈ 10%. It is also uncomfortably true that even the best researchers sometimes get over-confident, not to say arrogant, try to go it alone, and do not get advice from a competent statistician – that is unforgivable and may amount to misconduct.
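For those who want to see the arithmetic, here is the statin example as a few lines of Python (the 12% and 16% figures are the invented ones above; the key point is that the quoted 16% is a relative reduction, not an absolute one):

```python
# Relative vs absolute risk reduction, using the invented statin example above.
baseline_risk = 0.12             # assessed stroke risk: 12%
relative_risk_reduction = 0.16   # "reduces your risk by 16%" – relative, not absolute

new_risk = baseline_risk * (1 - relative_risk_reduction)  # 0.12 * 0.84 = 0.1008
absolute_reduction = baseline_risk - new_risk             # about 2 percentage points

print(f"New risk: {new_risk:.1%}")                        # New risk: 10.1%
print(f"Absolute reduction: {absolute_reduction:.1%}")    # Absolute reduction: 1.9%
```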
 
In this post I discuss two basic ideas, which I shall call Research Style and Research Type, but be aware that I am INVENTING data here.

RESEARCH STYLE
At this stage you need to consider whether your style is quantitative or qualitative. It is easy to confuse these two and simply think of them as describing data types, but to do so means you are missing the whole point. In general, if your outcome is in some way intended to be predictive then your style is likely to be quantitative, whereas if it is intended to be mostly descriptive then it is almost certainly qualitative. For example, suppose I decide to study infection rates after surgery.

Quantitative - here, for simplicity, I would choose two elements: the procedure and the infection rate. So over time, and with a sample of patients, I record the relevant data. Using this data I could process it statistically and predict, say, that in knee surgery the infection rate is likely to be 25% of patients.

Qualitative - knowing that 25% of patients become infected after knee surgery is interesting but not of much use in deciding what to do about it. So my next research task could be to study the surgical procedures used in knee surgery with a view to constructing a checklist which, when used by the surgical team, will lead to fewer infections. So here I end up with a description – a series of questions (usually not more than 8) – of what to do at both the start and end of an operation to minimise or prevent post-op infection.

Thus you can see that the terms qualitative and quantitative are to do with the kind of OUTCOME you want, not primarily the data itself – it is VITAL that you understand this point.

RESEARCH TYPE
Broadly speaking there are two types: the first is interventionist, where you deliberately make a change in a situation and then study its consequences, and the second is observational, where you simply record what is currently going on. Using the same example as above:

Observational – using the knee surgery example, I do nothing other than record which surgical procedure was used and the post-op infection rates. I don't interfere in any way and make no changes.

Interventionist – using the knee surgery example, let us suppose I develop the necessary checklist to be used by the surgical teams – that is my intervention, the change I make. Now I start recording the data exactly as before, about procedure and post-op infection rates. At the end of these two studies I CAN decide whether my intervention made any difference at all. You can also see in this example how studies can – and in some cases, as in this one, must – be linked.

In summary

1. Start by getting raw data on current post-op infection rates.
2. Next, a study to develop a checklist of things to do to minimise infections.
3. Finally, get the checklist into use, collect infection data again, and see if there has been a significant reduction (or of course an increase, but we hope not). Here we might employ a statistical test of significance to be sure the change has made a real difference – see the sketch below.
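As a sketch of how step 3 might look – the counts are invented, and a real study would fix the test and the significance level in the protocol beforehand – a chi-squared test on before/after infection counts could be run like this in Python:

```python
# Invented counts: infections before and after introducing the checklist.
from scipy.stats import chi2_contingency

before = [25, 75]   # 25 infected, 75 not, out of 100 pre-checklist patients
after = [14, 86]    # 14 infected, 86 not, out of 100 post-checklist patients

chi2, p_value, dof, expected = chi2_contingency([before, after])
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
# Compare p against the significance level fixed before the study (e.g. 0.05):
# only if p falls below it do we treat the drop in infections as more than chance.
```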

(If you are interested in checklists in medical practice, see the book "The Checklist Manifesto: How to Get Things Right" by Atul Gawande (a surgeon), ISBN 978-1846683138.)
 
I will use two posts to outline how results can be hijacked. Here is the first, relating to how you generate and present results; these points will help you look at what people say and allow you to exercise true academic scepticism. My advice would be to read through these ideas and then look through this or other discussion boards where people introduce data and construct arguments, and see if you can spot these poor practices. The ideas are common to all kinds of research and are unquestionably weaknesses of huge significance. I use terms typical of the scientific community, though they are of course not universal; if you want to consider these ideas (though not all of them are mentioned) with extended examples, read ‘Bad Science’ by Ben Goldacre.

Cherry Picking/Cooking - this occurs when you are selective, or very selective, about the outcomes or the basic data, so that you only choose examples that support your particular case or stance. Roughly speaking, if you fiddle the basic data by being selective we might call that ‘cooking’, and if you choose only favourable outcomes from the processed data, that is cherry picking. Engaging in either of these practices means you took a short cut, and it amounts to dishonesty.

Torturing the Data - "torture the data and it will confess to anything", as they say at Guantanamo Bay. Once you fix in your brain that a particular thing is true, you start seeing it everywhere in your data; you want it to be true and become ‘violent’ if others disagree with you. Ben Goldacre in his book ‘Bad Science’ recounts a story about Richard Feynman – undoubtedly one of the finest, though somewhat maverick, brains of his day – who started a lecture with a very salutary story. If you cannot understand the point he is making, you really do need to do a lot of reading and re-reading of the ideas presented here.

You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won't believe what happened. I saw a car with the licence plate ARW 357. Can you imagine? Of all the millions of licence plates in the state, what was the chance that I would see that particular one tonight? Amazing...

Surrogate Outcomes – this is used in two senses; the first is the practice of inferring one research outcome from another, and the second is choosing an ‘easy’ outcome to obtain.

Inferring Multiple Outcomes - in itself, being able to infer other outcomes from the one you have obtained might be a useful idea, but if it is stretched or becomes routine then we may end up with anything – Ford cars have good engines, therefore all cars must have good engines. In the medical line we might say a drug improves blood test results so it must protect against heart attacks, or lab studies on mice show that salmonella-infused cancer cells stimulate an immune response that kills the cancer, so this will happen in humans too.

Easy Outcome - in research you always have to decide what your main outcome will be right at the beginning, and this is usually expressed as part of your research question. But as Ben Goldacre points out, there is a good ‘trick’: use a surrogate outcome, so instead of looking for a real-world outcome such as death or pain, use a surrogate outcome that is easier to attain. So if your drug is supposed to reduce cholesterol and so prevent cardiac deaths, don't measure cardiac deaths – measure reduced cholesterol instead. Reducing cholesterol is much easier to achieve than reducing cardiac deaths; the trial will be quicker, cheaper and much more positive!

Zero Alternatives – this is similar to ‘torturing the data’, but it occurs when you decide what you want to conclude and explicitly or implicitly look only for data that might support it; in effect you do not consider alternative explanations of the data, assuming that you must in fact be right. This is a very subtle form of malpractice because to an outsider it can look like Popperian falsification – a very proper scientific method – but without the necessary honesty about what the data as a whole is telling you.

Hiding Methodology – part of the presentation of any set of data or results is describing the methodology used: the research method and the research plan, including details of how the data were extracted and processed. Without this information any reader of the study has no real grounds for confidence in the outcome. Be honest: would you trust a research outcome if the study owners refused to tell you how they got their results? Once we know the methods we can check for flaws or weaknesses – for example, in medical research there are the so-called Jadad scores. According to Goldacre, studies which don't report their methods fully tend to overstate the benefits of the treatments by around 25%, and that is practically fraudulent as well as possibly dangerous.

Hiding Demographics – part of any report is showing that the patients recruited for the study did meet the criteria you set, so that you can be confident the data refer to a suitable population. With this information you can then decide where the results might apply – for example, a study looking at the relationship between blood pressure and exercise in young Asian adults may be of little value in deciding whether an exercise regime for the over-60s with high blood pressure in West Sussex will have similar benefits.

Hiding Results - despite your best efforts, things in a trial may come out negative, and most find this disappointing, so you might try to hide the negative results or not publish for a long time. Goldacre points out that this is exactly what the drug companies did with SSRI antidepressants: they hid the data suggesting they might be dangerous and buried the data that showed they were no better than placebo. In any set of data there will be oddities – outliers – and often they can make your drug look suspect, so just delete them as spurious; but if they are helping your drug look good, include them even if they are spurious. Either way, you have to wonder about the ethics of those who do this kind of thing and then pass it off as ‘cleaning up’ the data.

Get More Data - if you get negative results, don't publish, but do some more trials with the same protocol in the hope that they might turn out ‘better’; then you can bundle all the data so that the negative results are swallowed by more mediocre positive results.
 
This is my second post outlining how results can be hijacked. If you want to consider these ideas (though not all of them are mentioned) with extended examples, read ‘Bad Science’ by Ben Goldacre.

Authority - are you taken in because the people who generate a claim are 'experts', well qualified, so it must be right? Of course we want to check credentials, but if we rely on those alone we will be making a big mistake. Sadly, the literature in almost every discipline is littered with academically well-qualified charlatans. By all means check qualifications, but don't fall into the trap of thinking that is enough for a result to be valid.

Journals and Review Sites – often in student work one cannot find a reference to a single reputable journal that has published a definitive study in the area under investigation. So when reviewing a study by anyone, you must look at the citations given, for authority and currency in particular. This is not a difficult task; most university libraries have journal collections, and there are review sites such as the Cochrane Collaboration in the medical sciences.

Interpretation - in research it is often said that getting the data is easy, processing it hard, and interpreting it is where we give up, lie down in a dark room and hope the problem will go away. Finding meaning is always going to be hard work because:

Clarity - results may not be all that clear; unfortunately, results may also be far too clear, which should always make you suspect you have made a mistake (some things are just too good to be true).

Logical Form – this may sound odd, but Popper suggested that sometimes the outcome of research might be what he called ‘tautological’, meaning the primary data had no bearing on the result. In other words, the inferences and conclusions drawn were not a consequence of the study data itself and therefore are not reliable.

Patterns - if you look at any set of data long enough you will find patterns; sadly, it is all too easy to be biased or lazy or tendentious and look for what you want to see, or even insert what you want to see. So one could ignore any protocols, throw all the data into a statistics package, assume correlation proves causation, and report as significant anything and everything.
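You can demonstrate this to yourself: correlate enough pairs of pure random noise and roughly 5% of them will come out ‘significant’ at p < 0.05. A short Python sketch, where by construction there is nothing real to find:

```python
# Pairwise correlations on pure noise: some will look "significant" by chance.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
noise = rng.normal(size=(40, 30))   # 40 invented variables for 30 patients

hits = pairs = 0
for i in range(40):
    for j in range(i + 1, 40):
        r, p = pearsonr(noise[i], noise[j])
        pairs += 1
        hits += p < 0.05

print(f"{hits} of {pairs} pairs 'significant' at p < 0.05")
# Around 5% of the 780 pairs (~39) pass, despite the data being random noise.
```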

Look for Definitions/Criteria – often a research report will offer definitions or cite criteria, and in itself this is not a bad thing. Sadly, however, one cannot prove something by definition or by careful choice of criteria. So if these are offered you must look at them with a sceptical mind and ask: are they general or specific, and are they valid, or just a little too convenient? For example, if a study cites criteria for the efficacy of a blood pressure drug, then such criteria must of necessity apply to any blood pressure drug, and if they don't, they are suspect.

Play with the Baseline – in any trial there is also a baseline of natural predisposition to certain conditions; for example, some people have a naturally higher risk of high blood pressure than others. It may turn out that the trial or treatment group is already doing better than the control group quite by chance – so leave it like that; it gives you a head start. If, however, the control group is doing better, then adjust for the baseline in your analysis. Neither of these two alternatives is acceptable: you are hiding pertinent facts.
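A small invented illustration: both groups below improve by exactly the same 5 mmHg, yet the raw end-point comparison flatters the treatment simply because that group happened to start lower.

```python
# Invented blood-pressure data showing how baseline imbalance can mislead.
import numpy as np

rng = np.random.default_rng(7)
base_treat = rng.normal(148, 8, 60)   # treatment group drew a 'lucky' baseline
base_ctrl = rng.normal(155, 8, 60)    # control group starts higher
end_treat = base_treat - 5            # identical true improvement in both groups
end_ctrl = base_ctrl - 5

print(f"Raw end-point gap: {end_ctrl.mean() - end_treat.mean():.1f} mmHg")  # ~7
change_gap = (base_ctrl - end_ctrl).mean() - (base_treat - end_treat).mean()
print(f"Change-from-baseline gap: {change_gap:.1f} mmHg")                   # 0.0
# Reporting whichever of the two comparisons looks better is the abuse above.
```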

Embroidery – it can happen that you get almost no variation in your results and although this sounds a good thing it probably indicates a fault somewhere. With no variation there is nothing much to say ort explain so one often finds that researchers just fill up the p[ages with what amounts to an extension of the Literature Review not an analysis of the data. For example, suppose you have a questionnaire on pain and patients have to rate each answer some scale – that sounds find but sadly one often finds the questions or scale is to bland to be of value so everyone more or less agrees. The sort of thing one sees is questions like “Do you agree that early pain relief in post operative care is essential to recovery” – well here you can see that it is very unlikely that any one will disagree so in effects the question has no real value, no variation and you end up embroidering your answer.

Ignore Drop-Outs - people who drop out of trials are much more likely to have done badly and much more likely to have had side effects. There is therefore a terrible temptation to ignore them, because they can only make your drug or treatment look bad. So, without the required scientific rigour and honesty, you will want to – or others will suggest that you – ignore them, make no attempt to chase them up, and leave them out of your final analysis. This is really fraud, or very close to it, and again it is a case of hiding uncomfortable data.
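A toy sketch of why this matters, with invented outcomes; counting drop-outs as failures is one simple intention-to-treat convention, used here purely for illustration:

```python
# Invented outcomes: 1 = recovered, 0 = not recovered, None = dropped out.
treated = [1, 1, 0, 1, None, None, 1, 0, 1, None]

completers = [x for x in treated if x is not None]   # ignore drop-outs
itt = [0 if x is None else x for x in treated]       # drop-outs count as failures

print(f"Per-protocol success: {sum(completers) / len(completers):.0%}")  # 71%
print(f"Intention-to-treat:   {sum(itt) / len(itt):.0%}")                # 50%
# Quietly analysing only the completers inflates the apparent success rate.
```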

Knowledge - finding meaning implies you need to be really knowledgeable – expert in your area – and you have to be absolutely honest.

Statistics - be very wary of statistics and always get an expert to help you decide what statistics you need and how to make sense of them. Sadly this is often not done, and serious blunders can and will be made if you don't really understand what you are seeing in the data, or have only a shallow understanding of the various statistical measures your SPSS package churns out. Be honest: most researchers are NOT statisticians, so don't be afraid to ask for help at the start and end of a research project. Indeed, one of the biggest blunders you could make is not getting good statistical advice at the start – let's face it, once the data is collected it's too late to change your mind, and you may end up having to abandon the whole project because you belatedly realise the data is not suitable. Finally, be aware that you can process ANY set of data and derive a result, but if it subsequently turns out there were fundamental errors in your choice of data then the project fails and you may well be humiliated and discredited because of it. Worse, your faulty results might, for example, show a drug to be safe when it is not, leading to very serious human consequences, or discredit a drug that is later found to be effective and safe.

Suppression - it is very tempting, but also dishonest, to suppress negative findings or findings you do not like for whatever reason; this can have serious implications, say in medical studies, and when you are discovered to have done such a thing your academic career is over. It may help to keep these two aphorisms in mind, because they both point to the very worst in research: "If facts do not conform to theory, they must be disposed of" and "Researchers should always state the opinion on which their facts are based."
Conflict of Interest – very often in trials a drug company will pay the bills as well as define the whole study for you. This may be alright but it will put pressure on you during the study and you may well find the company wants access to the data as you go along and after looking at it make ‘suggestions’. This must be avoided and there must not even be a shadow of bias creeping in. One might note the case of diabetes drug Avandia where indeed the drug company had access to data as various trials progressed and that is simply not ethically acceptable behaviour.

Over- or Inappropriate Generalization - this is just another way of making sure you understand the notion of not arguing from the particular to the universal. That is, you get one result and conclude it now applies everywhere; sadly, this usually occurs when you are desperate to prove your point at any cost. A good example arose on a discussion board I saw recently, where one member argued that because one historical event was true and had supporting evidence, every other one had to be true as well. To give a more mundane example, this faulty logic would lead you to say after research: Ford cars have good brakes, therefore Honda cars must also have good brakes – this might be true, but it does not logically follow.

Localization – this occurs when you fail to or refuse to see how your logic should be generalised or to put it another way, saying in effect that the logic only applies where you say it does and nowhere else. For example, suppose I argue that two accounts of the same medical event differ therefore they are fabricated. This argument cannot just apply to these two events so the generalised form would be that when any two event descriptions differ they must of necessity be fabricated. As you can see the generalisation is obviously not justified because it fails to take into account that the events may later be reconciled or they may just view the event from different perspectives.
 
In my next series of posts I will try to describe the kinds of critical evidence-gathering mechanisms, and will begin with one of the simplest, one everyone can carry out almost routinely. In these notes I am basing some of what I say on the papers you can find at http://www.pharmj.com/pdf/cpd/pj_200...encebased1.pdf

Case reports and case series.
A case report usually identifies something unusual and interesting which you want to share with others. A case report describes the medical history of a single patient in the form of a story, and a case series is a collection of similar reports, if they are available. They are usually used to record and/or alert other health professionals to rare occurrences. For example, if a patient who has taken two different drugs separately in the past takes them together and develops a life-threatening arrhythmia, and the doctor treating the patient suspects that the two drugs may be interacting, he or she could produce a case report. Because there is no control group, case reports and series are not statistically valid; they provide anecdotal evidence. The danger is that medical professionals will attach too much importance to a single occurrence, so when reading a case report you must remain sceptical.

Here is an example where one result was spectacular enough to tip researchers and practitioners over into being convinced (see New Scientist, 20 September 2010, No 2779). NovoSeven (a recombinant factor VIIa manufactured by Novo Nordisk) was used in 1999 to treat an Israeli soldier with catastrophic blood loss; it appeared to stop the bleeding, allowing surgeons to carry out the repair. As you may know, this drug is now used in trauma rooms around the world, but it is not licensed for this use, there were no RCTs, and what evidence there is suggests that NovoSeven is not effective in traumatic bleeding. Bear in mind that about 600,000 people a year bleed to death in hospitals, so the pressure to trust this one case is enormous. Stories like this often end up as case reports in journals, regardless of the fact that some patients would have survived anyway. In contrast, almost no one reports cases where NovoSeven was used and the patient died.

If the case is reported in a journal it will carry some authority, and you need to be aware that drug companies know the credibility of any case report depends on the authority and narrative ability of the storyteller. For example, in 2005 a convention was organised to develop guidelines on the management of severe bleeding, where Novo Nordisk paid for travel, accommodation, meeting facilities, honoraria, and preparation of the guidelines. In contrast, there has been a large trial of an alternative, tranexamic acid (The Lancet, vol 376, p 23).

The moral is clear: in the absence of evidence from RCTs we must remain sceptical, and case reports are essentially anecdotal, so they must be treated with care.
 
Evidence-based medicine, or evidence-based practice, is a term we encounter with increasing frequency. Basically, the term means using good evidence to make sound clinical decisions. That all sounds fine, but what exactly counts as valid evidence of clinical benefit and cost-effectiveness? So we are faced with two questions: “what is the best evidence?” and “how does one know if a piece of research is good enough to be relied on?”

The Hierarchy of Evidence
The term “hierarchy of evidence” is one you probably know; it describes how different types of evidence are ranked in terms of importance when clinical intervention decisions are made. Evidence can come from a primary or a secondary source: primary evidence comes from direct research such as a randomised controlled trial, whereas secondary evidence analyses primary evidence, commonly looking at several sources to carry out a meta-analysis. The hierarchy is commonly pictured as an evidence pyramid, though for our purposes a list will do; given a choice, case reports are the least reliable and systematic reviews the most reliable:

Systematic reviews and meta-analyses
Randomised controlled double-blind trials
Cohort studies
Case-control studies
Cross-sectional surveys
Case series
Case reports

I have already spoken in an earlier post about case reports but here is a short note on systematic reviews.

Systematic Reviews
Before research can be readily accepted it must be reviewed by competent specialists, both medical and statistical. Reviews are an important part of finding meaning and significance in research outcomes, but they are also a way of preparing for research. Finally, as a student you will almost certainly have to carry out a project of some kind, so again a competent review is essential to success.

Becoming Expert - constructing a review might simply be thought of as a way for you to become expert in a particular, and usually well-defined, area. For example, suppose you wanted to do research in fibromyalgia, a musculoskeletal condition that causes pain, fatigue and other symptoms; then your first step is to become expert in that area, and you do that first by reading the literature and then through clinical practice, working with patients under supervision.

If you do not do the reading and practice then you simply will never know enough even to start thinking about a research project or assessing the research of others. For example, suppose you wanted to assess pain in the above condition and you decided to construct a questionnaire – it is OBVIOUS that unless you know a lot about the condition you simply will not know what pertinent questions to ask, and so the whole research effort will be wasted. Additionally, no one will trust your findings anyway, because you have not, by way of your written review, demonstrated high levels of competence in the subject area.

Reading the literature implies that you have at least the basics of medical training; then it is a case of unearthing the primary sources. Let me be plain here: primary sources are not books, though you might well read several to get up to speed. Primary sources are journal papers, government reports, research reports and similar first-hand, current accounts of research in your chosen area.

Mechanisms - now, you can do the review yourself, and in many ways that is best. These days you might start by looking at Wikipedia – NOT because it is an authoritative source (it definitely is not) but because it often has an attached reference list, and that can be an invaluable starting point. In this case the Wikipedia article on fibromyalgia has 142 references.

Now the work begins as you build up a reading list and get access to the various journals and other papers. Look at the dates on each article and in general look for things that are current, which roughly means no more than 10 years old. However, be aware that in every discipline there are ‘classic’ papers which are foundational, so look at those too, and try to find out who the main or major players are in your chosen area of research.

This cannot be rushed, and if you are to do it properly it is going to take several weeks at least, working at it perhaps 20 hours a week – and remember, this is just the beginning of your study. Another way is to use reviews constructed by others, and among the most well known and highly respected are those produced by the Cochrane Collaboration (http://www.cochranfoundation.com/). The collaboration, founded in 1993, is an international, not-for-profit organisation of academics which produces systematic summaries of the research literature on health care, including meta-analyses.

One final word: in the review we are NOT studying research methods or protocols but rather the subject itself, because those methods and protocols are common to all subject areas. The study of methods and protocols is a separate subject area, and you must not confuse the two.
 
Research Designs and Data
Before I look at withdrawals and drop-outs from trials, and how they have to be considered when generating a research outcome, it is perhaps wise to consider the data itself. Given a trial, you have to decide what data you want to obtain and record, and although this might sound easy, in practice it requires considerable care and thought – if you get it wrong, the whole trial fails, or worse. In any trial it's not just the data: there are always going to be a large number of other variables that have to be managed, and even when we come to the sample points themselves, it is obvious that no two patients are the same.

Hypothesis - normally you start with a question that becomes your hypothesis, and classically we use a null hypothesis asserting that the new therapy is no better than some other – or, more generally, that there is no difference. If at the end we do find a significant difference between therapies, then we accept the alternative hypothesis that the new therapy is superior. However, we have to set at the start what we regard as significant, otherwise we are in danger of rejecting the null hypothesis when it is in fact true, or accepting it when it is false. The basis of the hypothesis is your research question, and this might be something along the following lines:

  1. The mean level of dopamine is greater than 36 in individuals with schizophrenia.
  2. There is no difference in efficacy between the new drug and the standard drug in treating tuberculosis.
  3. In 65-year-old males with atrial fibrillation, does anticoagulation with warfarin or antiplatelet therapy with aspirin give a greater reduction in the risk of stroke?
Be careful here, as it's easy to believe that the formation of a hypothesis and the data needed to verify it are rationally derived. But of course one always has to make judgements, and that is not an entirely rational process; that is not how our brains work, and whenever we make a judgement it is a combination of our rational and emotional brains (if I can put it like that) that is involved. If you find this difficult to accept, then just look at the Mayo Clinic's criteria for diagnosing depression and ask yourself how they got those criteria, and whether they are now absolute in the same way that Archimedes' principle or gravity is the same for everyone – an absolute law of nature.
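To make research question 1 above concrete, here is a hedged sketch of the corresponding one-sided test; the dopamine values are invented, and the threshold of 36 is simply taken from the example:

```python
# H0: mean dopamine level <= 36; H1: mean > 36 (research question 1 above).
from scipy.stats import ttest_1samp  # 'alternative' needs scipy >= 1.6

dopamine = [38.1, 41.0, 35.2, 39.5, 37.8, 42.3, 36.9, 40.1]  # invented sample
alpha = 0.05  # significance level fixed BEFORE looking at the data

t_stat, p_value = ttest_1samp(dopamine, popmean=36, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```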

Complications - getting data is complicated by things like cost and the fact that some measurements are notoriously inexact: blood pressure, ECG interpretation, X-ray interpretation and pain scores among them. Be aware that the data you need to generate your trial outcome may not be the same as what you actually collect from the patients, since raw patient data may need to be pre-processed via intermediate calculations. For example, you may need to use raw data to calculate things like allowable blood loss, body mass index, cardiac output, vascular resistance and so on. Incidentally, there are quite a few Apple apps for this kind of thing – MedCalc 300 ($5) or MediMath Medical Calculator ($2) – and it is likely your hospital will have suitable IT systems, but of course to use these tools you have to know what you are doing medically.
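A minimal sketch of that pre-processing idea, using the standard textbook formulas for body mass index and cardiac output with invented patient values (a real trial would validate its calculation pipeline rather than trust ad hoc scripts):

```python
# Deriving trial variables from raw patient data.

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI (kg/m^2) = weight / height squared."""
    return weight_kg / height_m ** 2

def cardiac_output(stroke_volume_ml: float, heart_rate_bpm: float) -> float:
    """Cardiac output (L/min) = stroke volume (mL) x heart rate (bpm) / 1000."""
    return stroke_volume_ml * heart_rate_bpm / 1000

print(f"BMI: {body_mass_index(82.0, 1.75):.1f} kg/m^2")   # ~26.8
print(f"CO:  {cardiac_output(70.0, 72.0):.1f} L/min")     # ~5.0
```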

One additional difficulty associated with inexact measurements is that there will always be a margin of error, and if you start rounding then any calculations can become inaccurate – very occasionally wildly inaccurate, because in some cases the data becomes (in mathematical speak) ill-conditioned. A warning sign that this might happen is when the calculation includes division. In general, then, if raw data is used in calculations it is best not to do any rounding until you get to the final answer – that way you reduce any possible error.
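A tiny invented example of the danger: when the difference of two nearly equal measurements ends up in a denominator, rounding early can change the answer by a large factor.

```python
# Two inexact readings whose small difference sits in a denominator.
a, b = 1.26, 1.24

exact = 1 / (a - b)                              # 1 / 0.02 = 50.0
rounded_first = 1 / (round(a, 1) - round(b, 1))  # 1 / 0.1  = 10.0

print(f"Round at the end: {exact:.1f}")           # 50.0
print(f"Round too early:  {rounded_first:.1f}")   # 10.0 – a fivefold error
```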

Deciding on Data - when deciding on data there are two ways to think about it. The first arises because in clinical trials we sometimes know the way a drug works, or at least have a theory of how it works, and that theory will help tell you what data you need to collect. If the drug mechanism is unclear (as is often the case), then you have to be inductive and make an informed guess at what might be useful.

Obviously, having a good theory of the drug mechanism is a much stronger place to start. The second way is to consider the symptom picture of how a condition presents: you may have a good or very good understanding of the disease mechanism, and again that will indicate what data you need to record. But remember, symptoms are a very subjective thing, so almost every conceivable way of establishing the benefits of a treatment must start with the individual's experience and build from there – this is one reason why a clear view of how a particular intervention works with a known disease profile is much to be preferred.

Be warned that you must consider the so-called co-morbidities, as there may be several other drugs involved with a particular patient, and these co-morbidities can play a major role in a patient's overall clinical condition. For example, you may find that one group in the trial turns out to have a substantially higher number of smokers, so unless you abandon the trial you have to take into account that one group has a higher level of morbidity and mortality. You might think this kind of thing can be ruled out by careful patient selection, but if you try to rule out everything you will never get a trial group together.

Data Types - when you have selected the data you need to collect, you must become aware of the kind of data it is. In summary, one has to understand the difference between dependent and independent variables; once that is determined, consider whether the data is quantitative or qualitative and what kind of scale you are using. In general, quantitative data is usually associated with a research outcome that is in some way predictive, whilst qualitative data is invariably used when we want an outcome that is descriptive or explanatory in nature. That is, a quantitative outcome might say that drug A is better than drug B, whilst a qualitative one would explain how drug B works.

Classification and Demographics - if all patients were exactly the same we would only need to test one of them; unfortunately, in the real world there is potentially a large pool of variation. This means that, as best you can, you define a profile of what each patient should look like if they are to be selected for the study. An important part of classification is the so-called demographics: age, sex, education, ethnicity, lifestyle, etc. This might not sound important, but a study of diabetes in mainly elderly Caucasian women in the UK may have only very limited value in the care of young males on the Indian subcontinent with the same condition.

Once we have decided on demographic details you make very specific classification choices related to the condition you are interested in. You might like to go back to the section on Hypothesis above, look at the three research questions and try to work out what data you need to collect in each case.

Warning - be careful: if your classification is too loose almost anyone can get selected, and if it's too tight you will have enormous trouble finding patients who fit your criteria.

When all the data is in, you will have to report on the classification defined and demonstrate that the sample matches that profile. If you cannot so demonstrate, the trial is a failure, because in effect you have a set of results but don't know to whom they apply – or they apply to a different population – and so the results may or may not be of any value (and in these circumstances they usually are not).

Data Collection Protocol - once you have decided on the data, you will have to work out how it is to be collected, and that protocol will have to be tested and agreed by all parties, including in many cases the participants. The protocol must then be agreed with any ethics committee and later be kept as part of the research record; at the end you will have to look back and evaluate it, and if necessary add some qualifications to your outcome, especially if there were difficulties or feedback indicates that collection was in some way ineffective, difficult or compromised.

One should also note that any protocol should be in some sense longitudinal and look for side effects, and that might mean taking extra measurements not strictly necessary. For example, the drug therapy might lead you to worry about, say, liver damage, so you would wisely monitor this. Indeed, in general you must not suppress negative data because you can perhaps explain it, or at least think you can. Everything must be a matter of record. It is perhaps interesting to note here that studies which don't report their methods fully tend to overstate the benefits of the treatments by around 25%.

Data Trails - there must be a full record of all the data collected, and nothing must be suppressed or discarded. Ultimately, the data plus your research design is your proof, and it also means someone else could process your data and should get the same results. Similarly, if they also have your complete design, everything can be fully checked and, hopefully, no flaws found. I have written a note later on evidence-based practice (EBP) and you will see there how important it is, when evaluating evidence, to have sight of the data, the suggested outcome and the methods used. If ANY of these is missing then a reliable evaluation is not possible and, in my view, it would be impossible for you to come to a truly informed position. It's the old dilemma really: if you are not told the whole truth then you cannot be sure you are making the right decision.

One afterthought: it is very unlikely that your trial will have no flaws, because there is bound to be a compromise between what you want to do and what can be afforded. But you must do all you can to make sure the trial and the associated protocols, including data recording and processing, are as secure as you can make them.
 
You may find this of interest as a simple but complete example of a study design that might be undertaken by a student. In practice large hospitals or other institutions will likely have a department or section to help you with designs.

Research Plans/Design
Here is an interesting outline example that illustrates many important research principles, and you would do well to discover what they are. This is a medical example, but for our purposes it is easy to understand – although reading this will not qualify you as a clinical practitioner, so please do not go off and try any of it without supervision.

Let's say someone is suspected of suffering from depression. In medicine this is not something you can quantify on any fixed scale; there is no machine you can sit in front of, or any physical test one can do, that ends up with measurements of your psyche on a computer printout. So the question is: how can we get reliable diagnoses and begin treatment? One possible way is to set criteria – you can think of them as standards – such as those shown below, devised by the Mayo Clinic, based in Rochester, Minn. Thus, if a person markedly exhibits the majority of these criteria then you can reliably give them a diagnosis of depression.

Loss of interest in normal daily activities, Feeling sad or down, Feeling hopeless, Crying spells for no apparent reason, Problems sleeping, Trouble focusing or concentrating, Difficulty making decisions, Unintentional weight gain or loss, Irritability, Restlessness, Being easily annoyed, Feeling fatigued or weak, Feeling worthless, Loss of interest in sex, Thoughts of suicide or suicidal behaviour, Unexplained physical problems, such as back pain or headaches

Be aware that these criteria are not absolutely measurable, and the best we could do is, say, use a 5-point ordinal scale from ‘totally agree’ to ‘never get that feeling’ to assess each of them. If you do not know what is meant by an ordinal scale, there is a post in this thread that deals with all types of scales.
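As a toy sketch only – the threshold of 4 and the simple majority rule are my inventions, not the Mayo Clinic's – scoring such criteria on a 5-point ordinal scale might look like this:

```python
# Invented responses on a 5-point ordinal scale (5 = totally agree,
# 1 = never get that feeling) for a subset of the criteria above.
responses = {
    "Loss of interest in daily activities": 5,
    "Feeling sad or down": 4,
    "Problems sleeping": 5,
    "Trouble concentrating": 3,
    "Feeling worthless": 4,
    "Thoughts of suicide": 1,
}

marked = [c for c, score in responses.items() if score >= 4]  # 'markedly exhibits'
if len(marked) > len(responses) / 2:
    print("Majority of criteria markedly present:", marked)
```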

Basics – at the start of any research project it is best to think through certain aspects so that you have a clear view of what you are doing – usually Problem, Target and Plan – and at this stage particularly:

Outcome – having thought through the above, you now have to decide what you will produce at the end of the project. Some possibilities are: a survey report, a set of recommendations, a plan, some qualification on the use of the criteria, etc.

Actor – it is very important to think about a person or persons who will receive and use your outcome and what they will do with it.

Research Style – at this stage you need to consider whether your style is quantitative or qualitative. It is easy to confuse these two and simply think of them as describing data types, but to do so means you are missing the whole point. In general, if your outcome is quantitative then it is in some way intended to be predictive, whereas if it is qualitative then it is intended to be mostly descriptive.

Study Type – broadly speaking there are two types: the first is interventionist, where you make a change in a situation and then study its consequences, and the second is observational, where you simply record what is currently going on.

Thinking Process – begin by asking how these standards/criteria were set, since this obviously has to be done with stringency, as people's lives are involved. They did not just appear out of the air; some sort of rational process was used to arrive at them, so what could those thinking processes have been? In most cases they will have been either:

Deductive - meaning they were generated from some theoretical standpoint, such as a theory of what causes depression or possibly a theory of how a particular drug works; or

Inductive - meaning they are a kind of best guess, where perhaps you have noticed that depressed people exhibit certain characteristics. At this stage you do not have enough information to form a theory, so in a way you are hoping the results will show that these criteria are good at describing or even predicting depression.
Part of this will also be consideration of some or all of the following points:

Qualifications - Must these criteria of necessity be set by an ‘expert’ or could anyone do it or at least suggest possible criteria?

Errors - Is it possible that the person or persons setting the list could make a mistake or mistakes even if they are unquestionably expert?

Aspects of Depression - the focus is on the pathology of depression, and this list of criteria might be thought of as aspects of the condition. It is therefore important to ask: is the list complete, are there relevant aspects missing, or is the list too long – are some included aspects without real bearing, so that they can be discarded?

Criticality - consider if this list might lead to false diagnoses because these symptoms may also indicate other conditions.

Validity – meaning here: do the criteria indicate depression and not something else?

Reliability - meaning here: are the criteria consistent over time, and so useful in diagnosis, making the research worthwhile?

Ethics - suppose we use this list to collect data: are there any ethical issues that we might need to deal with, and if there are, how can they be accommodated, or are they insurmountable? This will cover everything from data collection to processing, storage and dissemination.

Research Method – to test the list for acceptability we need to do some kind of test on it, and so we need a framework to confine and guide our work. Possible research methods are: experiment, survey, case study, vignette, grounded theory, action research, etc. For our purposes here let us assume that the survey method is chosen – but as a question for you: what factors might you take into account in such a choice? Once the research method is decided and justified we must go through all the following stages.

Population – can we identify where our possible respondents come from? This is not a simple task, and careful thought and a practical outlook are needed.

Sample Frame – can we obtain a list of some kind that can be used to select respondents who we might collect data from? This is a KEY point in the whole process and unless this is done well the sample may well be totally invalid.

Classification - classification means that we collect data about the respondents themselves (as opposed to data about the medical condition via the depression criteria listed above) - this might be age, ethnicity, job profile and so on. We need an exact protocol/set of criteria to be sure we collect data from a relevant set of people. One final aspect of classification is to decide whether we should use only people who have already been diagnosed with depression, those suspected of the condition, or just anyone.

Selection Method – once we have the sample frame it will be necessary to select names from it, and there are many ways to do this: random, purposive, cluster, etc., but ideally we would want to use a random number generator to select people from this list. Do not be tempted to 'invent' ways to randomise, because they invariably turn out to be systematic and may therefore invalidate all your results – many research efforts are ruined by inappropriate randomisation.

Blinding – one might consider blinding here, where neither the clinician nor the patient knows what intervention, if any, is involved. However, blinding, to be useful, is totally dependent on the randomisation process. One may also have to consider the placebo effect.

Choose/Calculate a Sample Size – the population might be quite large, and it can be shown statistically that a sample of sufficient size will give us the required level of confidence in our results – so how can we calculate a sample size, how many respondents do we need? There are many ways to do this and you will have to do some research to see what might be appropriate in your case (a simple illustration follows). The sample frame and the sample size will define your precision, that is, how representative your sample is of the population.
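As a minimal sketch, here is the standard formula for estimating a proportion to within a chosen margin of error, written in Python. The 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5 are assumptions chosen purely for illustration, not figures from any real study.

```python
import math

def sample_size_proportion(p=0.5, margin=0.05, z=1.96):
    """Sample size needed to estimate a proportion p to within
    +/- margin, at the confidence level implied by z (1.96 ~ 95%)."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# Worst case (p = 0.5), 5% margin of error, 95% confidence:
print(sample_size_proportion())  # 385 respondents
```

Note that p = 0.5 maximises p(1 − p), which is why survey texts so often quote figures around 400 respondents.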

Data Collection - suppose we decide to collect data with our survey: what might be the most practical way to do that – interviewing patients, examining clinical records, a patient questionnaire, or might it even be possible to do it by observation?

Mode of Collection - can data be collected by anyone, such as you or a nurse, or must it of necessity be a clinician? Just work through the list of possibilities above and suggest who might do it, but also consider whether it may be automated.

Research Plan – since we are trying to establish whether these criteria accurately and reliably indicate depression, we have to have some way of knowing whether clinically the respondent is thought to be depressed or not. So we might proceed in several ways:

a. Take a sample of the general population, collect results and come to diagnoses. At the same time take a sample from clinical records of people who have been diagnosed as depressed and collect the same data. If on processing these two sets of data we find no significant differences, then one would question the usefulness of such a test.

b. Use clinical records and extract answers to each query; that way we can later process the data as a whole and see if there is any correlation between clinicians' findings as recorded in the patient record and the result we might obtain from the collected data. If a correlation exists we might then argue that the criteria are a useful guide. Alternatively, you might use the same patient records but then go to each patient and fill in the questionnaire, after which the work is the same.

Data Type - what kind of data is it that we are trying to collect here and what kinds of processing can we apply to it? Will it be opinions, factual data, etc.? One consideration is whether the data are nominal or ordinal, because that will influence the way we process them. In this case the responses are at best ordinal: there is no scale on which, say, 'irritability' can be measured with precision – what we mean here is that we are not able to say things such as 'I have zero irritability' or 'I am twice as irritable as I was yesterday', and of course there is no way to be sure that different people are registering the same levels.

Design of Questionnaire – this will be dealt with later but will also include methods of checking reliability and validity. Do not be tempted into thinking that the design of a questionnaire is simple; in practice it is one of the most difficult instruments to design and use. In this case the questionnaire is designed on one dimension, that of depression, expressed by listing several suspected aspects along a bipolar rating scale.

Pilot – if the study has any significance it should also be preceded by a pilot to make sure that our design does not have any flaws.

Processing – once the data have been collected they can be processed, and this is most often done in two stages. The first just summarises the raw data into some convenient form: tables, catalogues, charts, statistics, etc. – whatever seems appropriate, or whatever gets the data into a form that will allow you to generate your outcome. One might start by using Cronbach's alpha to check the consistency of the questions (are they all measuring the same thing?) and then generate simple measures like the standard deviation and more complex ones, such as looking for correlations between questions. A sketch of the alpha calculation follows.
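As an illustration, here is a minimal sketch of the Cronbach's alpha calculation in Python. The five respondents and four questions are entirely made-up data; in a real study you would let a statistics package do this for you.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x questions) array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of questions
    item_vars = scores.var(axis=0, ddof=1)       # variance per question
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of row totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering four questions on a 1-5 scale (made up):
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [3, 4, 4, 3],
          [5, 5, 5, 4],
          [1, 2, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # 0.98 - highly consistent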

The last stage is to bring all your findings together and generate your outcome. To do this you can look at the general format used for survey reports; if your outcome is a model you might start by considering the universal process model; if you want to generate a best practice you might use a best practice model to guide you; and so on.
 
There is a fascinating article in New Scientist Issue 2790, page 30, dated 11 December 2010, called Not all Placebos are Born Equal by Irving Kirsch. Essentially, the article asks whether we are wasting billions on worthless drugs because the complex placebo effect is undermining clinical trials, and he suggests we use an active placebo instead (this might be the basis for a useful final year project/dissertation).

Introduction - everyone knows that some side effects in patient information leaflets are scary indeed. But placebos are not entirely good news for clinical researchers, who need to disentangle the physical actions of "real" interventions from the effects produced by the power of suggestion. It is a secret to no one that one of the biggest disease groups caught up in the complexities of the placebo is mental illness (it affects nearly 20 per cent of us at some stage and fuels a $19-billion drugs industry). Kirsch extracted reams of unpublished drug trial data and has a bold-sounding theory: the placebo effect may account for all of the benefits of antidepressants! Obviously proving this will be extremely difficult.

New Research Designs - one possible route is to use more effective placebos in double-blind clinical trials. Placebos are used in RCTs, in which patients with a particular disorder are randomly assigned to receive either the active drug or a placebo (a dummy pill with no active ingredients). The medication is approved only if its effect in treating the disease is reliably superior to that of the placebo.

Breaking Blind - a critical aspect of the RCT is that it be double blind, meaning that neither doctor nor patient knows whether the patient is receiving drug or placebo, because of the need to control for the effect of expectancy (the belief that one is receiving treatment). Current thinking is that if patients know they have been given a placebo, the placebo effect will disappear; conversely, patients will show an especially large placebo effect if they are certain they have been given the drug being trialled.

The question is whether RCTs are up to the job of keeping patients blind. Studying clinical trials of antidepressants shows they are not, because many patients and doctors work out who has been given what. In one of the few clinical trials in which patients were asked to guess what group they were in, 78% were able to do so accurately, and doctors' accuracy was 87 per cent. If patients and doctors "break blind", then differences between drug and placebo in clinical trials may be illusions. So how are patients and doctors breaking blind? One possibility is that patients on the real drug are tipped off by the side effects produced. A quick check of whether such guessing beats chance is sketched below.
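You can test whether such guessing beats chance with a simple binomial test. A minimal sketch using SciPy, assuming (purely for illustration) that 78 of 100 patients guessed their group correctly against the 50% we would expect under intact blinding:

```python
from scipy.stats import binomtest

# 78 correct guesses out of an assumed 100 patients, tested against
# the 50% accuracy we would expect if the blinding were intact.
result = binomtest(k=78, n=100, p=0.5, alternative="greater")
print(result.pvalue)  # vanishingly small: guessing far exceeds chance
```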

The problem of breaking blind on the basis of side effects may be widespread, but because patients are rarely asked to guess what they have been given in clinical trials we rarely find out; the few studies that have asked show patients are often able to do so accurately. Kirsch is sure most prescription drugs are more effective than placebos, but if the difference is small and the drugs produce noticeable side effects, it is possible that there is no real drug effect, implying that RCTs don't do what they are supposed to, wasting billions on worthless medications.

What can we do - can we find a design that gets around the problem of breaking blind? One route involves using "active" placebos: a real drug that does not affect the condition being studied but which does have real side effects. For example, atropine has been used in a few clinical trials of antidepressants; atropine is not an antidepressant, but it has some of the same side effects, including a dry mouth, insomnia, headaches and drowsiness. Active placebos might seem like a major alteration of clinical trial methods, but the change is less radical than it seems. Placebos are necessarily composed of some substance, no substance is truly inert, and placebo ingredients have usually not been disclosed anyway.

Kirsch's active-placebo proposal is that the ingredients of placebos be chosen to mimic the side effects of the drug being tested and they should be disclosed when clinical trials are published. Active placebos alone may not be enough to solve the problem of breaking blind, however. In the atropine trials, the real drugs produced more side effects than the active placebo, and patients were more accurate in guessing the drug they were on than would be predicted by chance. So there could be spurious side effects even when active placebos are used.

An Idea - Meanwhile, there is one easy thing we can do: assess the level to which patients are breaking blind in antidepressant trials that use ordinary placebos. All we have to do is ask patients to guess if they have been given the real drug or the placebo - it would cost next to nothing. Why not make it a requirement for licensing new drugs? And by finding out how many clinical trials involve patients breaking blind, we could end up making a strong case for clinical trials using active placebos.
 
Drop outs and Withdrawals from Trials
This is my last note on trials, and it is a difficult area because often it can leave you with gaping holes in your data, making processing and generating your outcome very difficult and sometimes unreliable. Roughly, a drop out is random and usually you have no idea why it has happened; the patients just fail to turn up or send in data. Drop outs can occur before the trial begins or anywhere along the way. There may not be much you can do about it (someone might just die), but it will be exacerbated if your administration is shoddy. For example, I once saw a trial where patients had to book the first appointment online, which was excellent, and the follow-up was booked after the patient data had been collected. However, if for any reason a booking had to be changed it had to be done by telephone, but when you rang the number it redirected you to one of two other numbers which almost never seemed to answer. Not surprisingly, many patients just gave up and walked away from the trial, saying things like "if they can't be bothered then neither can I".

Withdrawals are slightly different and are much more a considered event, with reasons such as: the trial is not helping them, they are not getting better, or sometimes they end up getting worse. There may be family or work issues or a move to a new location, but at least you know about it. Either way, time and money are wasted, especially with drop outs, where it is very unpredictable. Remember, you will have worked hard to get funding and support for your trial, and if it now starts to come apart because of drop outs or withdrawals someone is going to want to know why. To see what I mean, imagine you have booked an MRI scan as part of your research but no one turns up – I think you will find your managers will not be very pleased at the waste, and other colleagues will possibly be angered because their patients had to wait longer for a scan. I am providing two summaries; in the first case you can find the full text at http://lost-to-follow-up.com/patient-dropouts/

Drop outs/Lost to Follow-Up Implications
In any clinical trial, patient dropouts and patients lost to follow-up can pose a major challenge. Each patient represents a significant amount of time, effort and other resources, so a high rate of patient drop out (they may drop out in such a way that it is impossible even to follow up) is not only costly but poses a risk to the interpretation and validity of the intended research findings. Retention depends on a combination of patient, physician and coordinator issues – factors that need to be carefully organised and evaluated to ensure success. Analysing rates of patient dropouts (which can be as high as 40%) and when they occur can give important information about patient characteristics, study design and conduct, and the respective intervention. It is also important to note the difference in dropout rates between the control and intervention groups. It is suggested that in a phase II trial the cost is about $6,500 per patient, so if you anticipate losses you might recruit more, but that in turn will increase the study cost, and depending on the number of subjects involved that could be a very significant figure. A simple sketch of the arithmetic follows.
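The recruitment arithmetic is simple enough to sketch in a few lines of Python; the 200 completers, 40% dropout rate and $6,500 per-patient cost below simply reuse the figures quoted above, purely for illustration.

```python
import math

def recruit_for_dropout(completers_needed, dropout_rate):
    """Subjects to recruit so that, after the expected dropout
    fraction is lost, enough completers remain."""
    return math.ceil(completers_needed / (1 - dropout_rate))

n = recruit_for_dropout(200, 0.40)
print(n)                                      # 334 recruits needed
print(f"extra cost ~ ${(n - 200) * 6500:,}")  # ~$871,000 at $6,500 each
```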

Common Reasons for Drop Out: age, with younger (<50) participants at significantly higher risk than older participants; minority status; and psychological and behavioural characteristics.

Common Causes of Withdrawal: The most frequently cited reasons are competing life demands, logistical/transport, parking, distance to travel, timing of appointments, inflexibility, demands of the study, and lack of motivation and commitment, stress related to family care responsibilities, interference with work, lack of time, complicated and cumbersome record-keeping and paperwork associated with a study, lack of any positive reinforcement and patient motivation, remuneration or expenses.​

All of these factors which determine whether patients drop out/withdraw have implications for developing strategies that enhance the likelihood of study completion. Strategies must use multiple methods to enhance patient retention and include initiatives that address multiple barriers and facilitators of research participation, such as motivation, convenience, and data tracking.

In this second example you can find the full text at http://www.biomedcentral.com/1468-6708/3/4

Dealing with Missing Data
Missing data cause bias in statistical analysis, and it is important to realize that there are no universally applicable methods for handling missing data. It follows that it is much better if you do all you can to minimize the problem of dropouts or withdrawals at the design stage and during trial monitoring. Reasons for patients dropping out of a study include death, adverse reactions, unpleasant study procedures, lack of improvement, early recovery, and other factors related or unrelated to trial procedure and treatments. To demonstrate with simple arithmetic the effects of missing data on key statistical summaries, consider a blood pressure (BP) study with 54 patients, where only the 30 patients who completed the trial were included in the paper's analysis. Having defined criteria for effective control of BP, the authors stated that 80% completed the protocol with effective control of BP and no side effects. That is, the authors counted the 24 patients out of the 30 completers who met the success criteria, obtained 80%, and ignored the 24 patients who dropped out prior to the scheduled end of the study at six months.

At this point it is worth recalling that people who drop out of trials are much more likely to have done badly and much more likely to have had side effects. Therefore there is a terrible temptation to ignore them, because they can only make your drug look bad. So without the required scientific rigour and honesty you will want to – or others will suggest that you – ignore them, make no attempt to chase them up, and not include them in your final analysis. This is really fraud, or very close to it.

Now, to show the importance of the above paragraph: if we only consider the 24 who met the criteria out of the 30 completers then we get 24/30 = 80% as the reported value. But the correct summary should take account of the drop outs, so using the same 24 successes we get 24/54 = 44.4%, which of course gives a totally different impression of the efficacy of the treatment. The point is that many who dropped out would have done so because of problems with the treatment, and others would have had no problem, but either way those 24 dropouts cannot safely be ignored. The minimum processing we would expect here is to assume, in turn, that all dropouts had side effects and then that none had, or that all drop outs were good responders to the treatment and then that none were; that is, we work out a range of possibilities, as sketched below. Obviously this is more complicated to do and harder to assess, but nothing is now hidden. Several points can be generalized from this simple illustration and closer examination of the other examples.
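Here is that arithmetic as a minimal sketch, using the numbers from the BP example above, including the best-case/worst-case bounds just described.

```python
successes = 24    # completers meeting the "effective control" criteria
completers = 30   # patients who finished the six-month protocol
randomised = 54   # everyone originally randomised to the study
dropouts = randomised - completers             # the 24 ignored patients

print(f"completers only:    {successes / completers:.1%}")   # 80.0%
print(f"intention to treat: {successes / randomised:.1%}")   # 44.4%

# Bound the truth: assume every dropout succeeded, then that none did.
print(f"best case:  {(successes + dropouts) / randomised:.1%}")  # 88.9%
print(f"worst case: {successes / randomised:.1%}")               # 44.4%
```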

It does not take much missing data to mislead an investigator, so always account for every subject randomized to the study. Using the total number of randomized subjects in the denominator, whether you are calculating an average or a proportion, is a step towards accomplishing this, because it prevents you from exaggerating the results. This principle is known as intent to treat (ITT).

Record and report the reasons for withdrawal and the number of subjects in each category of withdrawal according to their treatment group, because the reasons for patients dropping out can be used to help assess the nature of the missing data. Statistically these are called informative missing data and are useful in estimating the true response. When no particular information is known about the missing data they are described statistically as missing completely at random, and the missing data are then not informative.

I would not feel at all comfortable about these approaches unless the medical team had consulted with statisticians when dealing with missing data because there are many possible methods available.

Methods of Dealing with Missing Data
In any data analysis, the first consideration is the objective of the analysis. In the presence of dropouts (considered a trial flaw), we have to ask what the treatment effect would be, distinguishing patients who drop out totally from the study, so that no data are collected, from drop outs where some data were collected.

Imputation methods
In general, the basic idea of imputation is to fill in the missing data with values based on a certain model with assumptions – YOU MUST keep this in mind. The attraction of imputation is that once the missing data are filled in (imputed), all the statistical tools available for complete data may be applied. Possible imputation methods are: last-observation-carried-forward (LOCF), sketched below; proper multiple imputation (PMI) methods, which use regression models; partial imputation (PI); and using ranks or 'scores' of the observations instead of the actual values (for example, death would be given the worst rank, then lack of efficacy, adverse reaction, patient refusal, and so on).
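As a taste of the simplest of these, here is a minimal sketch of LOCF using pandas. The four patients and three visits are made-up data, and remember the warning above: every imputation method embeds model assumptions.

```python
import numpy as np
import pandas as pd

# Symptom scores at three visits; NaN marks a missed visit (dropout).
visits = pd.DataFrame(
    {"visit1": [10, 12, 9, 11],
     "visit2": [8, np.nan, 7, 10],
     "visit3": [np.nan, np.nan, 6, 9]},
    index=["pat1", "pat2", "pat3", "pat4"])

# LOCF: fill each gap with the most recent earlier value in the row.
locf = visits.ffill(axis=1)
print(locf)  # pat1's visit3 becomes 8; pat2's visits 2 and 3 become 12
```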

Conclusion
The issue of what to do about missing data caused by dropouts in clinical trials is a research topic that is still under development in the statistical literature. Handling missing data is intrinsically difficult because investigating a method of filling it in requires a large proportion of missing data, yet a large proportion of missing data makes a clinical study less credible. The best available advice is to minimize the chance of dropouts at the design stage and during trial monitoring.
 
You may be looking for or planning a project or dissertation, and I have found and read two articles in New Scientist (they include further refs) which you might find shocking or surprising or fascinating or absurd. Most GPs I speak to have never heard of these ideas, so have a look; you might even go beyond a project and have something that can be published, given that it seems the ideas are not widely known.

Taboo transplant: How new poo defeats superbugs http://www.newscientist.com/article/mg20827911.100

Faecal transplant eases symptoms of Parkinson's http://www.newscientist.com/article/mg20927962.600

Note. In some cases you may require a login to see the full article although your University library will have a copy

Just to be clear:

Project – where one collects primary data at a point in time from a defined source or sources in order to answer a Research Question centred on solving or partially solving a known problem.

Dissertation – where one collects information from journals or other respectable and academically acceptable primary sources. The intention usually is to speak at length, for example about a new technology or idea, by summarising the latest available information.

Thesis - Implies that the work is based on some hypothesis or premise, which is put forward without proof. The report then sets out to prove the premise and where this is not possible, to offer some discussion and evidence for its validity.​
 
Medical practice became almost static after the publication of Avicenna's great canon, partly to do with increasing religious authoritarianism but also because some key concepts were awaiting discovery: the production of reliable microscopes in the 1600s, the discovery of bacteria in the 1700s and the discovery of viruses around 1900. This, coupled with the Enlightenment, produced an unprecedented burst of activity that remains with us still.

Avicenna (Ibn Sīnā, 980-1037) wrote some 450 treatises but is best known for "The Book of Healing" and "The Canon of Medicine", which was a standard medical text at many medieval universities as late as 1650. What is interesting when looking back, and what might seem strange to us, is that even though the civilisation of the Abbasids was religious, they understood that science stands outside religion in the sense that its principles, theories and practices can be discovered and used by anyone. In simple terms we might say Archimedes' principle, Ohm's law, the laws of motion, penicillin or gravity don't belong to anybody but are the same and apply to everybody, be they an ardent believer or a zealous atheist. From our point of view in this posting, Avicenna, over 1,000 years ago, understood that you had to collect and sift through evidence to find the best treatments and practices and that you have to keep doing it – so in a way he was perhaps the first to use EBP.

One final point: it is certainly true that the often brilliant work and associated Arabic texts emanating from the Islamic translation movement formed the majority of scientific and medical writing; these were either original works or translations, mainly from Greek or Indian writers, and of course they had to be made by hand-printing or hand-copying. However, in 1440 the mechanical printing press with movable type was invented by Gutenberg. The mechanization of bookmaking led to the first mass production of books in history, in assembly-line style. A single Renaissance printing press could produce 3,600 pages per workday, compared with forty by hand-printing and a very few by hand-copying. Amazingly for the time, books by bestselling authors like Luther or Erasmus sold by the hundred thousand in their lifetimes.

Since Arabic script was too complicated to set on these early machines it could not be used for books, so everything ended up being translated into Latin, which in any case was more widely known in the West and was immeasurably easier to set in movable type. In summary, to move forward we need easy dissemination of knowledge, money (because science is a very costly business) and centres of learning which encourage the principle that all knowledge is provisional and open to falsification.

Defining Evidence-Based Practice
EBP as we know it was originally defined in the 1980s, and that definition was rather simplistic because of the way it saw the three elements – the practitioner's individual experience, best evidence, and client values and expectations – as overlapping only to a small degree. Additionally, it suggested that the process was simple and rational, failing to appreciate that practitioner judgement is not an entirely rational process, and tending to ignore client values and preferences. Treatment was then merely a management issue against a list of which intervention to use automatically for which diagnosis, regardless of your professional expertise and special understanding of idiosyncratic client characteristics and circumstances. A more comprehensive definition of EBP follows, where the three core elements are: clinical state and circumstances, client preferences and actions, and research evidence.

EBP is therefore a process for making practice decisions in which practitioners integrate the best research evidence available with their practice expertise and with client attributes, values, preferences, and circumstances. When those decisions involve selecting an intervention, the aim is to maximize the likelihood that clients receive the most effective intervention possible in light of the following:

1. The most rigorous scientific evidence available;
2. Practitioner expertise;
3. Client attributes, values, preferences, and circumstances;
4. Assessing for each case whether the chosen intervention is achieving the desired outcome; and
5. If the intervention is not achieving the desired outcome, repeating the process of choosing and evaluating alternative interventions.

This newer EBP model therefore illustrates that it is practitioner expertise that allows the integration of the core elements of clinical state and circumstances, client preferences and actions, and research evidence. If this is taken seriously it avoids a common misconception of EBP that characterizes it as requiring practitioners to mechanically apply the interventions that have the best research evidence. It follows that none of the three core elements stands alone; they work in concert, the practitioner's skill being used to develop a client-sensitive case plan that utilizes interventions with a history of effectiveness or, in the absence of relevant evidence, the best evidence available. This is of necessity much more demanding of the practitioner, because in effect it moves the practitioner's role from diagnosis and simple intervention selection to one of diagnosis and analysis of possible client-centred intervention strategies. This EBP model has to be seen as cyclic: as a treatment begins you review, reconsider and if necessary make changes. It is perhaps worth saying here that the definition of treatment is very wide and might be a new pill, a new protocol, a new checklist, a new method, a new policy or indeed anything that might impinge on patient care, and of course this applies to all practitioners no matter what level they work at. This is consistent with the scientific method, which holds that all knowledge is provisional and subject to refutation.

It is as well to recall a remark by Ben Goldacre: "If anti-authoritarian rhetoric is your thing, then bear this in mind: perpetrating a placebo-controlled trial of an accepted treatment – whether it's an alternative therapy or any form of medicine – is an inherently subversive act. You undermine false certainty, and you deprive doctors, patients and therapists of treatments which previously pleased them." So when you do get new evidence it may shake your confidence, disturb your normal prescribing practice and make you wonder if you have been doing more harm than good, but you must persist in the reliable belief that you are doing your best to select the most effective treatments.

Keeping up to date and Some Examples
It is always difficult to keep up to date, but this can be helped by getting together with other practitioners, courses, journals, drug company papers, government papers and so on. You can use the following sources, as well as a host of others that are particular to a medical area.

NICE – National Institute for Clinical Excellence (http://www.nice.org.uk/), where you can also join one of their committees.
The Cochrane Collaboration - http://www.cochrane.org/cochrane/archieco.htm
NREPP - National Registry of Evidence-based Programs and Practices - http://www.oasas.state.ny.us/prevention/nrepp.cfm
The Joanna Briggs Institute - International Collaborative on Evidence-based Practice in Nursing - http://www.joannabriggs.edu.au/about/home.php
Evidenced Based Practice for Public Health - http://library.umassmed.edu/ebpph/

It can also be done at a low level by reading magazines such as New Scientist or Scientific American; articles therein will almost always have further references to published papers or studies. One must not forget the internet, though much more caution is needed there unless you are sure the site is authoritative, such as the ones mentioned above. The following examples illustrate how one keeps up in a general sense; they are selected from New Scientist and are illustrative, not definitive, but if you read the articles you will be given journal references (such as Critical Care Medicine). I have selected two examples: the first challenges received wisdom, and the second is based on a chance discovery where the stakes are high if you get it wrong – but that is how medicine progresses: facing the challenges, imagination and persistence.

High Tech intensive care doing more harm than good NS Volume 207 No 2773 August 2010 pp46-49

The article is based on an interview with Mervyn Singer, director of the Bloomsbury Institute Centre for Intensive Care Medicine, University College London. NOTHING epitomises cutting-edge medicine so much as a modern intensive care unit. Among the serried ranks of shiny chrome and plastic surrounding each bed are machines to ventilate the lungs and keep failing kidneys functioning, devices to deliver drugs intravenously and supply sedatives, tubes to get food into a patient and waste out, and countless gizmos to monitor blood composition, heart rate, pulse and other physiological indicators. One might expect Singer to wax lyrical about the wonders of medical technology. Instead, he has this to say: "Virtually all the advances in intensive care in the past 10 years have involved doing less to the patient." And he goes further, arguing provocatively that modern critical care interferes with the body's natural protective mechanisms - that patients often survive in spite of medical interventions rather than because of them. Taking an evolutionary perspective, our immune system can fight off infections, our blood clots so that we don't bleed to death with every cut, tissues regenerate and bone fractures heal, if imperfectly, over time. The article then goes on to describe a new critical care model where the traditional one assumed that the body’s acute response following trauma leads to a lack of oxygen and cell death, but according to Singer’s theory, the body’s first response is a ‘fight mode’, followed by a shutdown stage that is a last ditch attempt at preservation and recovery.

Antibody cuts brain damage in strokes NS Volume 207 No 2768 July 2010 pp8
The discovery of an antibody that binds to certain brain receptors could reduce the side effects of a common stroke drug and buy additional time in which to use it. The preferred treatment for ischaemic stroke, in which a blood clot cuts off the blood supply to brain tissue, is a drug called rtPA, which dissolves the clot. However, the drug has to be given within the first few hours of a stroke; otherwise the risks of treatment outweigh the benefits. Dissolving the clot can lead to a sudden rise in blood pressure, increasing the chance that a blood vessel will rupture and bleed into the brain. Only 5 to 10 per cent of people who suffer a stroke make it to hospital early enough to be treated with rtPA; the rest are given drugs that do not destroy the initial clot but reduce the chance of further clots forming. Now a startling and completely unexpected discovery by Denis Vivien (University of Caen) has put a different perspective on this relatively simple picture: rtPA is actually released by brain cells. In small quantities, rtPA binds to brain-cell receptors for a chemical called NMDA. This triggers a short-lived influx of calcium, enhancing learning and memory. But damaged neurons release rtPA in large quantities, and this can cause neighbouring neurons to die. High levels of rtPA can also damage the blood-brain barrier, which may explain why the drug sometimes triggers dangerous bleeding.

Vivien has also developed an antibody that could overcome these problems. It stops rtPA from binding to the NMDA receptors, blocking its negative effects. "With the antibody, we completely prevent the deleterious effect of rtPA and we can increase the time window during which it can be given." If the results are replicated in humans, this might mean that the antibody could be given on its own, even before a stroke sufferer reaches hospital. The antibody could also make it safer to administer rtPA for a much longer period, vastly increasing the number of people who could benefit from it. What's more, because the antibody blocks the effects of the rtPA released by damaged brain cells, the treatment might benefit people who have had a haemorrhagic stroke. "We can postulate that maybe all stroke patients could benefit," says Vivien, who is now working with a pharmaceutical company to take the antibody into clinical trials.
Most of what you read here can be found in Allen Rubin's book "Practitioner's Guide to Using Research for Evidence-Based Practice", published by Wiley, ISBN 9780470136652, pp3-18.
 
Randomization - this is a difficult idea to define, because paradoxically if you can define it precisely you have in effect systemised it. However, the idea implies unpredictability, or we might say no detectable pattern. In clinical trials we want to remove selection bias, but the difficulty is to find a way of selecting samples randomly; it is much harder than one might think, and the literature is replete with failed studies where the method of randomization was poor, implying bias in sample selection, so the results cannot be trusted. Goldacre states that it is known from meta-analysis that dodgy methods of randomisation can overestimate treatment effects by 41%.

The idea of a medical controlled trial goes back to 1025 AD and the brilliant work of Avicenna (Abu Ali Sina) in his famous work "The Canon of Medicine". Goldacre, though, suggests that the first recorded randomised trial was carried out in the 17th century by John Baptista van Helmont, who challenged the 'theory' of the day and proposed a trial. To avoid any charge of cheating he divided the sample into two by drawing lots, half the patients going to Helmont and half going to others, and the research question was starkly simple – in Helmont's words, "we shall see how many funerals both of us shall have!"

There are many ways to randomise, but at the heart of most methods these days is a random number generator. Strictly speaking there is no algorithm for true randomness – if there were, the numbers would of course be predictable – so computer programs either produce pseudo-random sequences from a seed or obtain that seed by sampling the electrical signals in a circuit, which always contain natural and unpredictable variations. Using such a seed we can then generate a continuous sequence of effectively random numbers, as sketched below.
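A minimal sketch of that idea in Python: the `secrets` module draws on the operating system's entropy pool (which ultimately samples physical noise), and the result seeds an ordinary generator, giving a stream you can reproduce and audit later.

```python
import secrets
import random

seed = secrets.randbits(64)   # unpredictable seed from the OS entropy pool
rng = random.Random(seed)     # deterministic stream from that seed

# Record the seed in the trial documentation so the allocation
# sequence can be reproduced and audited later.
print("seed:", seed)
print([rng.randint(1, 100) for _ in range(5)])
```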

Why Take a Random Sample?
In medical trials the purpose is to measure in some way the effect of a treatment and be sure that what we have measured is entirely due to a particular intervention and nothing else. So randomisation is used for good reason, it ensures that the comparison groups differ only by chance at the point of treatment allocation.

Taking a Random Sample
Let us suppose you want to run an observational study on patients after cataract surgery. Scarring is a common post-operative side effect of surgery to remove cataracts, and it is known that 30% of patients need laser treatment one or two years afterwards because a membrane – a kind of scar tissue – forms around the implant and can gradually obscure vision. However, let us further suppose that a new lens implant material becomes available and we wish to see if scarring is reduced. Now, to do this trial you can assume the existing return rate with scarring is 30%, but ONLY do that if it has been verified in clinical studies. Alternatively, just choose patients over a longer period, some of whom will have the old implant material and some the new, and then you can compare the two data sets. Now 30% is a significant figure when you think of patient distress, and although the laser treatment is simple it still requires two hospital visits – one to carry out the procedure and one to follow up – and in any case every intervention should be avoided if possible.

Define the population - start by defining the population; usually this is done by setting criteria and then estimating how many people or things that might be. For example, suppose I want to sample all patients after cataract surgery who went on to develop scar tissue that obscures vision, and I estimate this to be 200 – although in this case I might be able to get patient records to tell me exactly how many there are with the old implant material and how many with the new.

Define a Sample Frame - this just means a list of some kind from which you will actually choose your sample points, as usually the population is too big to study as a whole, and in general a well selected sample will give you as much information anyway. In this case let us suppose I can get hold of patient lists; it might look as follows, where I number each patient in the trial group (those with the new implant) from 1 to 100 (in this case) and do the same with the control group (those with the old implant). In fact, as long as you number these lists systematically you can start and finish anywhere - so you could number the frame from 1 to 100, 200 to 299 or 87 to 186, etc. So I end up with two lists similar to the following.

Trial Group - 001 John Ashman, 002 Paul Brigham, .....,095 Janet Brown,...., 100 Anthony Zaccari

Control Group - 201 Lydia Taylor, etc​

For the rest of the example I will use just one of these lists, but the principles are just the same when you include both groups.

Decide or Calculate a Sample Size - there are many ways to do this, but just for example purposes let us say that I want to choose 20 patients at random for my research study out of the 100 I have available in my sample frame. (Any good statistical package will have a procedure for calculating a sample size properly; there are even iPhone apps, such as Biostats Calculator at about $10, that will do all this for you as well as deal with all kinds of stats and tests.)

Defining your sample size correctly is about precision, meaning how well the sample represents the population as a whole.

Generate Random Numbers - now I must generate 20 different random numbers between 1 and 100 (or between whatever systematic numbering for the frame you used). If the generator gives you the same number more than once, just discard the subsequent ones. Here I use the iPhone app AppBox Pro (a toolbox of apps) and its tool called Random, telling it my range (1 to 100) and then pressing a click icon (or you can shake the phone), and it gives me the list one number at a time.

68, 33, 61, 89, 17, 24, 73, 80, 01, 50, 85, 92, 60, 95, 37, 72, 79, 21, 28, 11 and writing them out in order for convenience:

01, 11, 17, 21, 24, 28, 33, 37, 50, 60, 61, 68, 72, 73, 79, 80, 85, 89, 92, 95

DO NOT be tempted to tamper with this list and say to yourself things like "72 and 73 cannot be right". You MUST trust that the iPhone app has done its job and given you a random list. I warn you, more problems than you can imagine occur when people try to second-guess these sophisticated random number generators - just trust them.

Select the Sample - now go through your sample frame selecting the patients based on these random numbers. Again, BE WARNED, do NOT try to second-guess no matter how tempted you are. Thus we end up with our sample of 20 patients.

001 John Ashman, 011 Paul Aldridge, 017 Victor Litchmore, 021 Gaetan Madhvani,....095 Janet Brown​

These are now your selected 20 sample points. If you wish, and it might be wise, you can select a few more in case some refuse to take part. A code sketch of the whole selection follows.
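The whole selection can be sketched in a few lines of Python. The frame below reuses the fictitious names from the example (with the other entries omitted), and the seed value is arbitrary – but record whatever seed you use so the draw can be reproduced.

```python
import random

# Sample frame: patients numbered 1-100 (fictitious names; most omitted).
frame = {1: "John Ashman", 2: "Paul Brigham",
         95: "Janet Brown", 100: "Anthony Zaccari"}

rng = random.Random(20110214)   # arbitrary seed - record it for auditing

# random.sample draws 20 DISTINCT numbers, so no discarding is needed.
chosen = sorted(rng.sample(range(1, 101), k=20))
print(chosen)

# Look each number up in the frame to get the selected patients.
sample = [(n, frame.get(n, "...")) for n in chosen]
print(sample)
```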
 
Ideally, one wants to compare an intervention with doing nothing (e.g. administering a placebo), for then we can be sure that any changes are due to the intervention. However, in many cases instead of using a placebo one compares one intervention with another, because it is considered not entirely ethical to give treatment to one group and none (the placebo) to the other, especially if there is some evidence of the treatment's efficacy.

Now suppose the intervention is a checklist aimed at reducing infection rates in knee surgery. In this case it is hard to see how one could administer a placebo (it need not be a pill), but we can get the same information in many cases from old patient records. One could, I suppose, use the checklist with one surgeon and not with another (or at two different hospitals), but keep in mind that at this stage we don't know for sure it is effective; hence the trial. Interestingly, a trial was carried out not just for knee surgery but for surgery in general, and one case was cited where the surgeon was very sceptical about the checklist idea until it was discovered, because of the checklist (nurses usually call out the checks, not the surgeon), that the replacement knee joint he was about to use was the wrong size – and an instant conversion to the idea took place.

Suppose we know that the infection rate is 25%. We would hardly let the control group be those who don't use the checklist and the trial group be those who do just to check our figures, as that implies possibly letting one group suffer so you can get data. So in this case we could take a random sample from clinical records where knee surgery occurred in the recent past and extract from it infection histories as well as the infection control practices in use. That then is our control group, and the trial group is the one that uses the new checklist.

In terms of the increased attention caused by the checklist, one might be 'worried' that the attention itself is reducing the post-operative infection rates rather than the actual checklist, and so if one is not very careful one might end up saying the checklist is more useful than it truly is, exaggerating the findings. So in a trial one does as much as one can to make sure that IF there are any changes, one can be fairly certain they arose (remember they can be negative or positive) solely because of the intervention (the checklist). Notice in this case I suggested that we look at infection control practices. I do this NOT because I necessarily want to change them but because they will allow me to moderate or qualify my results.
 
This is a fairly advanced sort of idea which calls on quite a lot of statistics and mathematics. The ROC (receiver operating characteristic) curve was used initially to help radio/radar operators decide whether something was just noise – natural and usually small variations – or whether there really was a signal there; one does not want to shoot off a missile because there just might be something there. The idea was taken up by the medical profession because typically you take a test sample from patients and you have to decide when a result is significant. There is a good tutorial on this at http://www.anaesthetist.com/mnm/stats/roc/Findex.htm

Take for example PSA. It has a range of values and there is a cut-off - above X there might be cancer and below it, healthy tissue. But of course human beings are biological animals, so there is no exact figure for a given individual at which one should start to worry or feel certain one way or the other; instead there is a range of values. Roughly speaking, when the PSA test was mooted someone took a large number of measurements from a representative sample of men. When you do this you in effect get two distributions: one of healthy men and one representing those with the condition, and of course these two overlap.

The reason they overlap is that some of the men the test shows as healthy in fact have the disease (false negatives), and similarly some of those the test shows as having the disease in fact do not (false positives). Therefore, if we process the data with knowledge of each patient's history, we can suggest a cut-off value. Usually of course one has the test, and if the result is in the cut-off range one is sent to a specialist for further tests, which might end in a positive or negative overall result. Two simple statistical ideas are used (a small sketch follows the definitions):

Sensitivity - how good the test is at picking out patients with the condition, so sensitivity gives you the proportion of cases picked out by the test, relative to all cases who actually have the disease usually expressed normalised between 0 and 1.

Specificity - the ability of the test to pick out patients who do NOT have the disease, again expressed as a fraction between 0 and 1.
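In code these two ideas are just ratios of the four cells of a confusion matrix; a minimal sketch in Python with made-up screening counts:

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)   # diseased patients the test catches

def specificity(tn, fp):
    return tn / (tn + fp)   # healthy patients the test clears

# Made-up counts: 80 true positives, 20 false negatives,
# 900 true negatives, 100 false positives.
print(sensitivity(80, 20))    # 0.8
print(specificity(900, 100))  # 0.9
```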

The ROC, then, is a graphic representation of the expected accuracy of a test in a population similar to the one that was studied to construct the original test. I can't draw a ROC curve here (see the link above), but it is constructed by plotting sensitivity (the true positive rate) on the y-axis against 1 − specificity (the false positive rate) on the x-axis, and any good statistical package will do this for you - BUT you really do need to know what you are doing, so take advice from a statistician.

Essentially, the closer the ROC curve is to the diagonal, the less useful the test is at discriminating between the two populations. The more steeply the curve moves up and then across, the better the test, and the better the overall trade-off between sensitivity and specificity. A more precise way of characterising this "closeness to the diagonal" is simply to look at the AREA under the ROC curve. The closer the area is to 0.5, the lousier the test, and the closer it is to 1.0, the better the test. Essentially, the diagonal is the line at which you might as well just toss a coin, because that would be just as good as the medical test at predicting the condition. Finally, the optimal cut-off is generally the point on the curve closest to the top-left corner, where sensitivity is high and the false positive rate is low.

If you find this hard to see, just think of the ROC curve as residing in a square with sides of 1; if you then draw the diagonal, it is obvious the area under the diagonal line is 0.5 and the area of the whole square is 1. If the test is very good the curve will shoot way up above the diagonal and gradually move across, hugging the square, whereas if it is not very discriminating it will rise only slowly and stay near the diagonal. Obviously, confronted with a ROC curve it will not be easy for you to find the area under the curve by hand, and that is why you MUST use a package; a sketch of what such a package does follows.
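To show what such a package does, here is a minimal sketch in Python using scikit-learn (one such package). The overlapping "healthy" and "diseased" test values are simulated, and all the numbers are purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Simulated test values: diseased patients tend to score higher, but
# the two distributions overlap (hence false positives and negatives).
healthy = rng.normal(4.0, 1.5, 500)
diseased = rng.normal(7.0, 1.5, 200)

values = np.concatenate([healthy, diseased])
truth = np.concatenate([np.zeros(500), np.ones(200)])

fpr, tpr, thresholds = roc_curve(truth, values)
print("AUC:", roc_auc_score(truth, values))   # ~0.92 for this overlap

# One common cut-off choice: the point nearest the top-left corner,
# i.e. maximising sensitivity minus the false positive rate (Youden).
best = np.argmax(tpr - fpr)
print("suggested cut-off:", thresholds[best])
```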

Please be aware that what I am describing here is what one does to establish cut off points. In the example given in the previous post the curve looks as if it is being used as a trauma diagnostic tool and in that setting it should ONLY be used by a competent physician and in concert with other patient data. Be aware that I am speaking mathematically here and have no competence as a physician.
 
This web site http://www.mendeley.com/ is fast becoming indispensable to academics and students alike, plus it is FREE. Mendeley is a reference manager and academic social network that can help you organize your research, collaborate with others online, and discover the latest research. To quote some of its features:

Automatically generate bibliographies
Collaborate easily with other researchers online
Easily import papers from other research software
Find relevant papers based on what you're reading
Access your papers from anywhere online
Read papers on the go, with our new iPhone app​

If you are planning to write up a project, dissertation or thesis this is an almost indispensable aid. Too often students are unclear about primary sources and think the term means books. However, although some books might creep into that category, it is best to think of writing that is nearer to the actual research of the authors involved. Generally, you are looking for 'first hand' accounts of a particular topic that was under investigation, and in the vast majority of cases that would not be a book. Typically, therefore, they are created by witnesses, or sometimes recorders, who directly experienced the events or conditions being documented, usually at the time they were occurring.

So primary sources are artefacts such as journal articles, research reports, case reports, government papers, conference proceedings (sometimes), autobiographies, memoirs, and oral histories recorded later. In short, primary sources are characterized by their content and might be available in some original format, in microfilm/microfiche, in digital format, or in published format. All good university libraries will have journal collections, but there is a huge advantage in having an online database which you can access and search anywhere, any time.

One needs care here: there is great reward in spending quality time in the library working through actual papers. Contrast this with using an online source to amass a huge collection of papers and references in quite a short time while neglecting the necessary, usually considerable and often laborious, time spent reading the papers themselves and making notes – this is a very important lesson, and if you forget it, sooner or later you will regret it, or worse.

When you are writing something up there is a more or less standard format:
Introduction and presentation/discussion of your Research Question

Thorough review of the current literature (meaning within a 10-year bracket). You might go earlier than this for classic papers, but without a review of current research you will likely fail. You must remember that by the time a particular piece of research is written up it is already a few years after the event.

Outline your research plan/design.

Presentation of your results (don't confuse results with answering your research question) plus an evaluation of the data gathering process/method itself, as it may be necessary to add qualifications on the data prior to generating your outcome (the answer to the research question).

Generation and presentation of your outcome – one might call it the answer to your Research Question – plus an evaluation of the outcome itself: how useful it might be, qualifying that usefulness if necessary.

Conclusions, remembering that these are intended to offer some generalisation of your outcome.​
 
A good research question is central to any effective research effort - as the saying goes, 'if you aim at nothing you will probably hit it'. But some definitions to start with, and I will take a few posts to deal with this.

Presenting Problem
It is a sad fact that often students (as well as research teams) cannot define what problem they are trying to solve with any kind of lucidity, or alternatively they try to say they are solving several problems all at once. Setting up and defining the problem is quite a challenge if it is to be done usefully and thoughtfully.

The notion of a problem can be defined in many ways, but a simple way is to define it as a matter of concern or debate amongst situation actors (those involved); it follows that a problem is an object, not an activity. The student must argue from evidence that a problem exists, but it is best to end the argument with a short and lucid problem statement such as:

…inventory discrepancies leading to delays in surgery.
…delays and errors in generating blood request data
…patients' complaints about delays in seeing a consultant.
…lack of trust in PSA results
...dopamine level in individual cases of schizophrenia.

Common Errors in Problem Specification
Stating the problem setting, not the problem; saying the problem is the name of the subject area; saying the problem is the same as the solution (very common); saying the problem is finding out how to do something; saying the problem is how to make a decision; saying the problem is that something is missing; etc.

Thinking Through Your Problem
Remember that any problem definition one constructs will not be absolute and accepted by everyone; in research this is not a problem as long as the researcher makes it clear what particular definition is being taken. Do not take this process too far and end up with either over-complicated or trivial definitions; they must be thoughtful and comprehensive. So it is recommended you start by thinking about six things, where the acronym CCC-APE (stated as "triple C-APE") is used to help you not so much define the problem as explore it, so that you can indeed write an adequate definition. Remember, the purpose of this is to 'force' you to think your problem through.

Characteristics – observable features or facets of the problem idea
Context – every problem exists in a context of some kind and it must be understood
Causes – every problem will arise due to some cause or causes
Associations – every problem will have links to other situation elements or problems
Perspective – when a problem is encountered it will always be seen from a certain perspective
Effects – say what effects ensue in the real world if the problem is not resolved.​
 
Research Outcome
Every research project has a single outcome, and mostly it is a document of some kind which outlines your findings and conclusions. Many get confused here and want to say "my outcome is a cure for the common cold", but in practice what your research will do is test a possible cure and then write up your results. To muddle this up is to confuse a trial report with the pink pill you have been testing.

It is therefore important to become aware that the end of a research project has four elements, and hence to understand the place of what we call the 'outcome'. Briefly, the four elements are written in the following order: results, outcome, evaluation and conclusions, though for convenience I will discuss the outcome last, because that is the only part directly relevant to your research question.

Results – taken to mean the primary data as collected and processed; those results are presented as tables, charts, statistics, and so on. It is important that this is seen as a preliminary step to getting the research outcome; in general this step is relatively easy and routine.

Evaluation – this occurs after one generates the outcome and is research specific, with two aspects: testing the outcome (a paper exercise) before it is used, and of course before the research document is finalised; and reviewing research practice for lessons learned.

Conclusions – implies that you take the results and corresponding outcome and make generalisations. One might look for originality, implications, insights, new or modified principles, limitations, new or modified theorisations, indications of best practice, lessons learned, indications of a need for further work, implications for law or standards, warnings or cautions, advice, caveats, values, ethics, factors or features including cultural ones, usage and user psychology, and other things that might occur to you.

Outcome
Once the primary data has been processed into some usable form (the results), the next step is to generate an outcome based on that processed data, and the outcome manifests itself as a document. Here is a list of possibilities that are, or can be, documents, although not all of them are likely to be suitable in a medical project.

An Account of, Appendix, Argument, Article, Best Practice Description, Business Case, Calendar, Cartoon, Catalogue, Chart, Checklist, Collation, Colophon, Concordance, Confession, Critical Apparatus, Diagram, Dictionaries, Dossier, Emendations, Essay, Framework, Grammar, Guidelines, History, Index, Instructions, Justification, Lectionary, Lexicon, List, Map, Matrix/Table, Menu, Method, Methodology, Model, Orders, Pamphlet, Plan, Policy, Position paper, Preface, Principles, Procedure description, Process Description, Profile, Prospectus, Protocol, Recension, Recommendations, Report, Research Paper, Review, Schedule, Set of Rules, Strategy, Template, Testimony and Theory

Whatever outcome form has been chosen, it will be placed in the research document as a chapter or part of a chapter. The important thing is that all these possible outcomes can be used in some way to bring about change, directly or indirectly, and the effects of those changes are collectively known as the target (I will say more on this later). So an outcome might be a series of actions or recommendations. Thus:

The outcome might be a business case for the use of a new surgical practice, which can be used by managers to make a decision on how it might be implemented. That is, the business case itself does not contain any actions, but it allows other actions to occur because of its content and hence eventually brings about change based on the deployment of the new procedure.

The outcome might be a chart of drug interactions with antibiotics used to treat necrotising otitis externa, which consultants can use to prevent treatment difficulties and hence speed up treatment and reduce patient stress.

Some final points that need to be considered with regard to stating the outcome clearly:

Caution - It is vital that researchers do NOT confuse their outcome (the means to bring about change) with the target (the expected change effects). For example, one might have a theatre utilisation plan (outcome) to get increased throughput (target effect). If a student is not able to make this kind of simple distinction then one must seriously consider whether they are in any way ready for work at this level.

Outcome Structure - students must know what their suggested outcome is; take, for example, a checklist. Although not shown here, each outcome form such as a checklist will have a description, a structure, a method of construction and a purpose or usage mode. The point is that if a student says his outcome is a “Position Paper” then it is fair to expect him/her to know exactly what that is as a description, a structure, how to construct it and how it is normally used.

Qualification - Finally, in every case where an outcome is stated it must be qualified. So if an outcome is a “model” then one must say what it is a model of (e.g. a primary care model); if an outcome is a “review” then one must say what is being reviewed (e.g. a review of laser eye surgery); and so on.
 
Continuing my series on forming a research question, I come to an area which you have probably not thought too much about, or even not thought about at all. But it is important if we are to carry out research well.

Target
Target appears in many places in a research project, so it is important to be consistent and avoid saying different things in different places, as that would imply a confused and careless mind.

Target is the effect or effects you want, implying that changes have taken place as a consequence of the use of your outcome. For example, a problem is observable through things such as poor productivity, low morale, high reinfection rates or clinical indecision. It follows that if we can solve or partially solve the presenting problem by creating and later using a particular outcome from our research, then we would hope to see observable effects such as higher productivity, improved morale, lower reinfection rates or improved clinical decisions.

Many students find this an odd and difficult idea so I will use a very simple analogy because in essence the idea is very simple.

As an example, consider what you might do if you were unable to get a better paid job (problem): you might decide to get some qualifications to enhance your CV (outcome). Notice, the enhanced CV is NOT the end effect; once you have the enhanced CV you can use it to get a better paid job (target).

A second example: consider cases of necrotising otitis externa, an infection which can be very serious and whose cause is often attributed to ear syringing in GP surgeries (problem). To deal with that you may carry out research with GPs and create a checklist (outcome) for use by GPs when assessing ear infections, with the target of early diagnosis of necrotising otitis externa (target).

In a given research project there is usually no time to demonstrate that the target effects are achieved because obviously that comes after you have completed the project to generate the outcome that will, when later used, lead to those change effects.

Common Errors - Here is a list of the most common errors. Be aware that target is a basic element and if it is muddled then it implies you do not really know the purpose of your research.

Saying that target is what you have to do to get the outcome – when this happens there is no understanding of target at all and your work is seriously deficient.

Saying that target is 1001 different things - it is sadly all too common to see students saying that their outcome will generate target effects in every part of the hospital and at every level; this is just thoughtless rambling.

Saying that target is what some future system will do - it is feasible to say that your outcome will be used to create a system that will have a certain function. But the function of that new system is NOT the target. The target is the effects generated when the new system goes into operation.

The target is NOT your CV; the target is getting a better paid job by means of using the CV.
The target is NOT a GP checklist; the target is early diagnosis by means of the checklist.

 
Actor
It is always useful to think about who will read one's research project outcome, as that helps ensure it is written to an appropriate level and in a suitable style. Every outcome will have an actor or actors, meaning that the actor will use the outcome to bring about change and achieve a particular result or effect (the target). It is essential that the outcome and actor match each other in the sense that the actor can credibly use the outcome.

Be careful here: when I say actor I don't mean you name a person such as “Joe Brown”; you instead name a position or role, so you say “IT Managers”, “General Surgeon”, “Practice Nurse” and so on. All this can be gathered together in a sentence that explains what the outcome is, who the actors are and how they will use the outcome to get target effects, e.g.

The outcome will be a report with recommendations (outcome) on the effectiveness of current audit processes (a qualification). This report will then be used by finance managers (actors) to make an informed decision on the way medical equipment requests are handled in the future, and in so doing bring about change in terms of the target of reducing the time taken to generate the request data while at the same time improving its accuracy.

Unfortunately there are a host of errors in considering who the ACTOR is, and they point to poor thinking skills and a lack of basic knowledge. The outcome/actor link, implying actions to bring about change, is of particular importance as it shows with clarity whether the problem, and how it might be resolved, has been thought through.

Confusing the Outcome with the Target - This means a student does not understand that to get any sort of target effect you need to use something. So for example, if you say my outcome is “increased productivity” then tacitly you are saying this will occur all on its own; but a thinking person will say that to get “increased productivity” you need, for example, the outcome of “a new training model”.

Confusing the actor with a place - Actors are people, so saying that your “Position Paper” will be used by “The NHS” is absurd. The NHS cannot use anything, but a person or persons in the NHS can. It is also clear in this example that a large entity such as the NHS will have thousands of employees, so just stating that it will be used by them is so vague as to be worthless.

Outcome and Actor Don't Match - The outcome must be able to be used by the actor. However, many students thoughtlessly just seem to write anything down, and so we end up with absurdities such as a surgical checklist to reduce post-op infection rates being used by the Chief Hospital Engineer.

Stating an Impossible Actor – this usually shows itself when a student suggests the actor is the Health Minister, that sort of thing, or perhaps suggests that the actors are ALL hospital managers in the UK.

Not stating an Outcome qualification - for example, stating “my outcome will be a position paper” has zero value because readers have no idea what the paper is about. Therefore, one must say something like “I will produce a position paper on the use and properties of digital paper for consideration by hospital IT managers, who may then decide there is some potential in this technology area and hence define a full scale feasibility study…”

This needs care, as we often see things that look as if they are right but are in fact wrong. For example, we commonly see an outcome such as “a hospital IT strategy”, but although it is qualified by saying “IT” it is still practically worthless, because we then have to ask: “is it an IT strategy for technology procurement?”, “is it a strategy for IT deployment?”, “is it a strategy for IT support?” and so on.

No appropriate outcome – unfortunately, often no outcome is mentioned, or one that is inappropriate, implying no awareness that there has to be an agent of change. That is, you cannot name an actor if you have no idea what it is you are going to give them.

Stating the actor as everyone - Often there is a reasonable outcome but then it is ruined by stating that the actors are: managers, support staff, sales staff, engineers, nurses, physiotherapists, radiologists, etc. – that is, everyone. This is most often a hopeless strategy, as it is rarely the case that a given outcome can be used by everyone. For example, suppose we have an outcome of a security policy on the use of backup software on laptops. Now it is tempting to say the actors are everyone with a laptop, but that would be a mistake, as you would have no authority to tell anyone to use the policy; the correct assignment of actors is something like “IT management”, because they will endorse, implement and ensure the use of the policy.
 
Getting your Primary Data
To get at the primary data that you need there are six steps, and you might find it useful to use the acronym CLAMPS (think of it as getting a grip on your data, though this is not the order we do the work in) as a way of remembering them. However, in the proposal it is only necessary to deal with spotlight, activity and collection protocol.

Spotlight – that is, highlighting the data that you need. For example, imagine going through a pile of CVs and using a highlighter pen to mark just the people who come from Singapore and work in IT support services. That is, you are ONLY spotlighting or highlighting the ones from which you will extract primary data and ignoring everything else. Thus the spotlight does two things: it tells us the location of the data (it's in the CVs) and what data we want to extract (profiles of IT people from Singapore).

Activity – this is the main activity used to record that primary data, and together with the spotlight it effectively becomes a definition of the data. For example, some common activities are: Account for, Analyse, Appraise, Assess, Catalogue, Collect, Compare, Compile, Contrast, Criticise, Define, Describe, Differentiate, Discuss, Evaluate, Examine, Explain, Explore, Illustrate, Interpret, Justify, Link, Outline, Portray, Profile, Represent, Standardise, Summarise, Synthesise, etc.

So, putting it all together: I spotlight the CVs of people who come from Singapore and work in IT support services in order to compile (activity) a job profile (primary data definition) for each person highlighted (a code sketch of this appears after the list below).

Model or Simulate – strictly this is NOT a step that one records in the specification, but it acts as a check on your Spotlight and Activity. So I recommend that you invent some data just to see that what you have said makes sense and that you can write it down. Using my example, I would invent a few job profiles for people who work in IT support services, and by that means I can feel confident I know what I am looking for as data.

Localization – meaning where you will go to actually collect the primary data (this could be a place, or it might be a log file, a pile of documents, a person and so on). Just think: if you don't know where the data is, you cannot collect it.

Collection Protocol - what is the protocol used to collect the actual data (observation, document searching, interviews, questionnaires, experiments, data logging and so on)? Please remember this is a protocol, meaning you have to identify the data that you want before you collect it. Using my earlier example, my spotlight was “people who come from Singapore and work in IT support services”, and I compiled the profiles by using, let's say, a document search (the protocol).

Presentation - try to think through how this raw collection of data will be presented in your work so that people can see what you actually collected, prior to it being pre-processed into a perhaps more structured form and later into your outcome. Typical structured forms are: a set of Descriptions, Calendar, Cartoon, Catalogue, Chart, Checklist, Collation, Concordance, Diagram, Dictionary, Dossier, History, Index, Matrix/Table, Profile, Prospectus, Protocol, Schedule and many other possibilities, but often a clearly ordered catalogue is the most useful form.

Example - Suppose I was considering the use of iPads by nurses to record patient/drug usage in wards, and since this is a very new technology, what I am proposing is a position paper on this technology discussing its potential usage areas, activities and associated issues. Here one might think of the problem as being about concerns ranging from manual methods leading to patients missing out on a particular drug, to wasting money on an untried technology, to worries that one might get left behind technologically; so the position paper is a necessary confidence-building step prior to perhaps a full scale feasibility study.
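
To make the Spotlight and Activity steps concrete, here is a minimal Python sketch of the CV example above; the records, field names and values are all invented for illustration, not a prescribed implementation.

```python
# A minimal sketch of the Spotlight/Activity idea using invented CV records.
# All names, fields and values are hypothetical illustrations, not real data.

cvs = [
    {"name": "A", "country": "Singapore", "role": "IT support", "skills": ["helpdesk", "backups"]},
    {"name": "B", "country": "Malaysia", "role": "IT support", "skills": ["networks"]},
    {"name": "C", "country": "Singapore", "role": "Nursing", "skills": ["triage"]},
]

# Spotlight: highlight ONLY the records we will take primary data from.
spotlighted = [cv for cv in cvs
               if cv["country"] == "Singapore" and cv["role"] == "IT support"]

# Activity (compile): turn each spotlighted record into a job profile.
profiles = [{"person": cv["name"], "profile": ", ".join(cv["skills"])}
            for cv in spotlighted]

print(profiles)  # one profile per highlighted person - the primary data
```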

Common Errors

Omit Spotlight and Activity - It is unfortunately all too common for students to go straight to specifying how to collect the data without first defining what that data is supposed to be. It MUST be obvious to ANY thinking person that it is completely illogical to say how you will collect something before you know what that something is. It would be as silly as setting off to my house knowing you have to collect something and deciding that a scooter will do, only to find out when you got there that it was a grand piano.

Commonly this error shows itself in lines such as “I will use interviewing and a questionnaire to gather my data”, without telling us what the spotlight and activity are (i.e. no data is defined).

Only stating the Location – many students just say where the primary data is located. So we might see things like “my primary data can be found in hospital reports and statistics” – well, that might be true, but it is miles away from telling us what the primary data is supposed to be. This error is often compounded by listing dozens of locations, so we see things like “my primary data is hospital IT strategies, network plans, training policies, help desk protocols and staff utilization factors”.

Confusion over Primary and Secondary Data – it is unfortunately all too common for students to state that the literature review is also the primary data, or to state that an existing (and therefore secondary) document or documents are primary data.
 
This is my last preparatory post before I discuss the structure of the Research Question itself.

Writing a Research Question
This is intended to encapsulate your whole project idea and intention in one lucid question. Ideally one wants open questions that request information. Questions can sometimes be like commands, such as "Would you pass the salt?", which looks like a question but is in fact a request for action, not for an answer. In research, however, we will only look for questions that elicit information.

The Research Question is a focal point in your work because it is the place where all the elements are brought together into a concise and lucid sentence that expresses what it is you are seeking as an outcome in your research and what new purpose it might serve as a change agent. The simplest questions implicitly or explicitly request information from a range of alternatives, and these are often called bi-polar questions; more generally, questions ask for information that includes explanations, explorations, descriptions and definitions, and they generally start with an interrogative. All questions have a natural structure, and that structure can change dramatically when you change the interrogative.

Basic Research Question Forms
It is best when attempting to construct a question to think about what sort of answer to expect – in normal everyday life we do this instinctively. For example, you would not say "is this the right way to Pablo's restaurant?" if you wanted actual directions, because that question form could only give you a Y/N answer. Instead you would probably say something like "how do I get to Pablo's restaurant from here?" and reasonably then expect an explanation. Broadly speaking there are four sorts of answer:

Bi-polar answers - Essentially questions that imply a limited range of possible answers. Typically, a bi-polar question starts with a word such as WHAT, IS, CAN or DOES.

Is it possible to sharpen this pencil? (Y/N)
Does it make sense to allow children to sharpen pencils? (Y/N)
Can a blue pencil be sharpened easily? (Y/N)
What is the common view of staff about using blue pencils? (Disagree, agree, etc.)

Explanatory answers – where the expected answer is an explanation, often in the form of a procedure or process. Typically, explanatory questions start with 'HOW' or 'WHY'. For example, "How can a pencil be sharpened safely by young children?"

Descriptive answers – where the expected form of answer is a description, often in the form of an evaluation. Typically, these questions start with WHAT or WHY. For example, "What is the purpose of HB0 pencils?" (a simple description) or "Why are HB0 pencils difficult to sharpen?" (an evaluation)

Exploratory answers – where the expected form of answer implies an exploration of something. Typically, exploratory questions start with HOW or WHY. For example, "How should we best use HB1 pencils in drawing figures?" (often an exploration leads to an explanation)

The most common interrogative words to start questions are: what, where, would, in what way, can, is it, why, which, how, does, who and do – whatever word you use, always ask what form of answer it implies. You must be sure that whatever form you decide on as an answer, you can actually construct it, and that when it is constructed as part of your research it is in fact useful strategically in some way. For example, suppose I decide that the form of answer I want is "a profile of the role of technological innovation in hospital success". The task you now have is to ask yourself whether you know how to express a role (write it down if you like) and whether knowing about this role will be of any use.

Research Question – Why are we asking it?
In normal everyday life questions come at us more or less all the time. Sometimes we just answer them, but more often than not we have a tendency to ask "why do you want to know?". It is therefore always useful when setting out your research question to ask why you are asking it. That is, you say to yourself: if I have the answer to this question then there will be some good outcome because of it. Sometimes we embed in our questions why we are asking them, but mostly we do not. You will see later, however, that you will have to make the reason plain in the research aim, so one might as well think it through at the question stage as well. In summary, a good research question should be:

Useful (or you can say important) - meaning it implies change is needed to improve a situation.
Interesting - to the researchers and others.
Answerable – practically (not philosophically), this means answerable within the time scale and constraints.

Research Question Form of Answer
For any Research Question there will always be several possible forms of answer, arising out of one's personal theory about the problem situation encapsulated in the question. Ideally one would like the research question to be worded so that ONLY one form of answer is possible, the one our theory suggested, but often that is not easy to do, so one normally has a range of options and competing theories to choose from; one then looks for a form that interests you or looks to have the most utility. Do not be tempted to have multiple questions all in one sentence, or to look for multiple answers; it is better to focus on one significant output form. The main forms of answer, to help you when considering your personal theory, are:

Bi-polar – interrogative: does, is, are, what, when; answer form: list of possibilities (Y/N, etc.)

Explanations – interrogative: how, why, who or where; answer form: a report, a model, an equation, a theory, a design, an evaluation, etc.

Explorations – interrogative: how, who or what; answer form: a list, an explanation, a comparison matrix, a pattern, a survey report, a theory, etc.

Descriptions – interrogative: what, who or why; answer form: a report, a process or procedure, a model, a policy, a strategy, a theory, etc.
 
Writing a good research question is quite a difficult job so over the next few posts I will illustrate how it is done. I will not use medical examples because I don't want you getting bogged down in clinical details; I want you to see the structure involved.

1. Just to remind you, the only possible interrogative words are: whose, who, whom, what, which, where, whence, whither, when, how, why, wherefore, does/is, is/are, and can. But commonly we only use: whose, who, what, which, where, when, how, why, does/is, is/are, and can.

2. The key to getting a good formulation is in linking the outcome with the actor, and here the emphasis must be on the idea of use by the actor to bring about change. Remember, the actor has to be credible; no one is going to believe you if you start writing things like "…used by central government in planning the economy…" or "…exploited by all hospital managers to gain…". So we might word that link in all sorts of ways, choosing the one that makes the sentence both lucid and natural in English while stating as clearly as you can how the actors use the outcome. So imagine for simplicity the outcome of a "policy" (or any other outcome); then we might express this in any number of ways:

…policy adopted by IT managers for use in…
…policy used by help desk administrators as a way of…
…policy for the use of sales staff in formulating…
…policy exploited by typical computer users to gain…
…policy applied by project managers in their…
…policy utilized in daily operation by network engineers to…
…policy enabling departmental managers to…
…policy employed by team leaders to…
…policy suited to the current sales staff…

It is also possible to use other formulations, so we could just as easily write

…so that IT managers may adopt the policy…
…may be enforced by the IT security managers by means of a policy…

3. Research Questions have seven features: Interrogative, Outcome, Actor, Problem, Target, Spotlight and Activity. It is vital you understand that the order in which the seven features appear in the Research Question will depend on the interrogative used. If you are not careful here and just stick the seven features anywhere, choosing any interrogative, you will end up with a question that makes no sense.

So, for example, if you use the interrogative "what" then the seven features will naturally fall into one order in the sentence, whereas if you use "how would", "how will" or "how can" there will be a different ordering. There is no rule about this, and only a careful reading of the question after you have written it will tell you if it is asking what you want. One final point: it is possible to have some of these features expressed implicitly, mostly the outcome, but I would guard against this unless you feel it is absolutely necessary to make the question read correctly and naturally in English.

Caution – questions that start with 'how can' or 'how could' are always very hard to write in English, especially if you are not a native speaker, and almost always they lack real thought and end up being trivial ones like those below.

"How can I use the instructions to build my Lego Model" or "How can driving lessons help me get a driving licence" or In what way can possession of a car help with my transportation needs".​

Here is a simple example; don't worry if you do not understand what a TRPS centre is, but instead concentrate on seeing how all the components are present.

What (Interrogative) IT centre management policy (outcome) can be defined by analyzing (activity) TRPS centre practices, technology and APCO international standards (spotlight) and then used by IT managers and centre administrators (actors) to deal with poor centre performance (problem) in order to give the public a high level of assurance regarding their possible medical emergency needs (target)?​
 
This is a summary of terms used in Research Methods, and in particular in the Research Question. These definitions are not universal, but the underlying ideas are, so DO NOT assume that you already know what these terms mean, else you are likely to get into considerable difficulties.

Problem – this must define a single core problem for which you are going to find a solution route.

Target – these are the effects that will be evident in the real world if the problem can be solved. It is permissible to list more than one effect, but it is best to look for the principal one.

Outcome – the object you will generate as the final product of your research project. Possible outcomes are characterised by nouns, so might be: models, frameworks, policies, strategies, position papers, reviews, procedure descriptions, best practice descriptions, dictionaries, lexicons, concordances, protocols, dossiers, diagrams, charts, plans, etc.

Actor - It is normal when you define your outcome to say who the actor or actors are (meaning persons) who will use your outcome to bring about change, leading to the target effects.

Thinking – it is important to be aware of how you think about the problem, because that will help you decide what data is needed. In simple terms: you may have a theory that you want to test, so the work is deductive, and consequently you only define data that serves to test the theory. Alternatively, you may have no fixed views, in which case you will be inductive and draw inferences from the data when you get it; in a sense you would more or less guess what data might be useful.

Activity and Data Spotlight – focusing on exactly the primary data that you need and nothing else. There are two parts: the Activity (or Method), how you record the data (account for, analyse, collate, appraise and so on), and the Spotlight: the data you want and the place where it can be found.

Research Question – this is intended to be a lucid question that connects the various features, expresses the direction of your research and summarises your whole project. Research Questions have seven features: Interrogative, Method/Activity, Problem, Actor, Spotlight, Target, Outcome, and you might find it useful to remember them by using the word "IMPASTO"; BUT note the question is NOT necessarily, or even usually, written out in this order. The correct order of these features in a sentence depends almost entirely on the interrogative if you are to produce a valid sentence in English. Possible interrogatives are: whose, who, whom, what, which, where, whence, whither, when, how, why, wherefore, does/is, is/are, and can.

Research Style – either quantitative, meaning mostly numbers make up the data, where the intention is to process that data in order to make predictions (such studies are often deductive in nature); or qualitative, where the data is largely text, and such studies are inductive, designed to look for understanding of a situation or phenomenon.

Study Type – broadly, there are two types: the first is interventionist, when you make a change in a situation and then look for its consequences, and the second is observational, when you simply record what is going on.

Research Method – method selection depends on many factors: context, time available, skills available, practicalities, access, reason for the study, what kind of outcome you want, cost, the nature of the study (quantitative or qualitative), scale, control, sensitivity of the data, etc. The basic purpose of any study is assumed to be to provide as an outcome one of the following forms: express an understanding, an exploration, a description, an explanation, an improvement suggestion, build something or prove something. Common methods and typical uses are:

Case Studies – useful when trying to understand a situation or practice
Vignettes – useful for exploring a situation in order to illustrate its major features
Action Research – useful when it is desirable to improve a situation by working within it
Experiments – useful when one is trying to prove, or more usually indicate, the truth of some proposition
Quasi-Experiments – as for experiments, but the experiment can only be simulated
Surveys – useful when trying to describe a situation or effect
Biographies/History – useful when one wants to explore a situation in order to replicate it or improve it
Grounded Theory – useful when the area under study is barely understood but needs to be explored
Ethnography – useful when one wants to describe a situation of some kind involving behaviour
Requirements Gathering – useful when one wishes to build a real-world object

Population – the set of people or things from which you will derive your data. You must try to be as specific as you can and also attempt to estimate the number of people or things involved.

Independence – often, when we analyse sample data for some feature or other, we attempt to see if our findings have some significance, but it is often forgotten that statistical tests of significance rely on the sample points being independent of each other. This is often hard to avoid because clusters will inevitably appear, and the bigger the sample the more likely that is. It is possible mathematically to correct for clustering, but at the same time you greatly reduce the significance of the results; still, that is better than reporting erroneous findings (a sketch of the standard correction follows).
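
The usual correction is the design effect; as a hedged sketch, assuming clusters of roughly equal size m and an intracluster correlation ρ, the effective sample size shrinks as follows:

```latex
% Design effect for a clustered sample: m = average cluster size,
% rho = intracluster correlation; n_eff is the effective sample size.
\[
\mathrm{deff} = 1 + (m - 1)\,\rho ,
\qquad
n_{\mathrm{eff}} = \frac{n}{\mathrm{deff}}
\]
```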

Sample Frame – the mechanism you will use to select sample points. Ideally, it would be a list of some kind from which, by a process (ideally a randomised one), you select sample points, but one must be assured the sample points are independent.

Sample Selection – with an acceptable sample frame you need a rational way of selecting a sample of the size you calculate. Typical ways of sampling are: random, systematic, stratified, cluster, stage, convenience, voluntary, quota, purposive, dimensional, snowball, event and time sampling. Some of these, such as cluster sampling, will imply that your sample points are not necessarily independent. A small sketch of two of these schemes follows.
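
Here is a minimal Python sketch of simple random and stratified selection; the population, strata and sizes are invented purely for illustration:

```python
import random

# Invented sampling frame: 300 staff records, each belonging to a ward (stratum).
population = [{"id": i, "ward": random.choice(["A", "B", "C"])} for i in range(300)]
target_size = 30

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, target_size)

# Stratified sampling: group by ward, then sample proportionally within each stratum.
strata = {}
for person in population:
    strata.setdefault(person["ward"], []).append(person)

stratified_sample = []
for ward, members in strata.items():
    k = max(1, round(target_size * len(members) / len(population)))  # proportional allocation
    stratified_sample.extend(random.sample(members, k))
```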

Sample Size – it is always necessary to calculate a sample size statistically, and there are many formulae for doing this; almost always they are based on the expected prevalence of the kind of thing you are looking for (one widely used formula is sketched below).
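
As a hedged illustration, Cochran's prevalence-based formula is a common starting point; here z is the z-score for the chosen confidence level (e.g. 1.96 for 95%), p the expected prevalence, e the acceptable margin of error, and N an optional finite population size:

```latex
% Cochran's sample size formula, with finite population correction.
\[
n_0 = \frac{z^{2}\, p\,(1 - p)}{e^{2}},
\qquad
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}}
\]
```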

Collection Protocol – the means by which the classified primary data (the data that tells you who or what you are to collect from) is collected: interview, questionnaire, observation, role playing, seminar, focus groups, document or record searching.

Pre-Processing – describes how the primary data, in its raw collected form, is structured into a suitable entity.

Outcome Processing – describes how the structured primary data collection is used to generate the intended outcome.
 
Before I give some further examples of well-formed Research Questions it is as well to look at two critical elements. Please be aware that these are not terms that are universally accepted, but in a way that is not important as long as you grasp the idea.

Spotlight - essentially, the data spotlight does two things: it points at, or illuminates, the primary data and where it might be found (though in practice the location is often omitted as being implicitly understood).

Activity – this is the activity that tells you how to extract the data from where you have it spotlighted.

Example 1. …to account for (activity) hospital inventory discrepancies caused by current procedures (spotlight)… Here the activity 'account for' means to explain and clarify something by giving reasons, and you can see this is appropriate in this case.

So here I am spotlighting inventory discrepancies, but I am not being precise as to the location or where I go to get them. My primary data would end up as a set of 'account fors' (descriptions, if you like), one for each inventory discrepancy in my sample.

Example 2. …to appraise (activity) the staff utilization practices with regard to IT technicians (spotlight)…

So here I am spotlighting staff utilization practices and the location is the IT technicians (where I go to get the data). My primary data would end up as a set of appraisals of utilization practices.

Example 3. …to catalogue (activity) the medical stress effects imposed on tele-workers (spotlight) in a fast-moving software development market… Here the activity 'catalogue' means to create an ordered collection of some sort, where there is a logical order and the essence of the task is to enumerate and describe.

So here I am spotlighting stress effects and the location is tele-workers in a fast-moving software development market (where I go to get them). My primary data would end up as a catalogue of the stress effects found in my sample of staff.

Finally, it is common to find researchers having difficulty finding a wording for the activity, so to help you here is a sample of verbs you can use:

Account for, Analyse, Appraise, Assess, Catalogue, Collect, Compare, Compile, Contrast, Criticise, Define, Describe, Differentiate, Discuss, Evaluate, Examine, Explain, Explore, Illustrate, Interpret, Justify, Link, Outline, Portray, Profile, Represent, Standardise, Summarise, Synthesise, …
 
Here is a simple example of a complete but concise description of a research study; to understand all the terms you may need to refer to some earlier posts.

Account for - Explain and clarify something by giving reasons

Problem – hospital inventory discrepancies leading to additional costs and delivery delays.

Target – reduced inventory cost and assured delivery times to wards

Outcome – revised inventory processes model

Actor – manager or managers responsible for inventory systems

Thinking – inductive since I have no clear idea what a solution might look like.

Activity and Data Spotlight – to account for (activity) inventory discrepancies caused by current procedures (spotlight) ….

Research Type – observational since all we are doing here is recording what is currently happening.

Research Question – what (interrogative) revised inventory processes model (outcome) can be used by inventory managers (actors) to make changes to the current system in order to reduce or eliminate inventory losses (problem), so that costs are reduced and delivery times to wards assured (target), by exploring and illustrating features of working practices in the current inventory system that might account for (activity) inventory discrepancies (spotlight)?

Research Method – in this case I am basically exploring this problem situation, looking for illustrations of why discrepancies occur, and that leads me to think that the method of Vignettes is the most suitable research method here.

Research Style – qualitative in that I will be attempting to describe the situation.

Collection Protocol – record searching coupled with interviews.

Pre-Processing – this raw primary data collection will be pre-processed in order to structure the collection into the form of a catalogue of process descriptions, each with a weakness assessment.

Outcome Processing – the catalogue produced in the pre-processing stage will be used to generate the outcome of a set of revised inventory processes using the NHS best practice model. This outcome will then be used to reform the way the inventory is managed, and in so doing allow managers to optimize the IT-based inventory management system and hence generate the target of reduced inventory losses, reduced costs and assured delivery times.
 
Here is an interesting example of a research study outline posted on Methodspace. This is what the researcher said when giving a short description.

I am new to doing research as a major subject and I am having some difficulties in choosing the proper methodology for my research question. I am interested in symptom management in oncology patients. My research question reads as "What influences nurses' decision making in managing opioid-induced constipation in oncology patients?" I have chosen a qualitative method using a phenomenology approach to gather in-depth understanding of this topic, with thematic analysis of the data. First I will introduce a decision-making pathway tool for bowel management, followed by interviewing the users (nurses) in a natural setting (a clinical setting) so as to gain their attitudes and experience of this phenomenon.

Here are some comments.
The Research Question is a little weak in itself - it seems you will end up with your raw data as a set of influences, so they are basically statements. But is that your outcome, what you are really looking for? My advice would be to think first about what your final outcome will be and its corresponding actors, those who will actually use your final outcome: the thing you produce to make changes in treatments AFTER the study has been completed. So the question is: would you just hand over a set of influences, or would you perhaps process them into a checklist or a report or a set of recommendations and so on as your final outcome, and then decide for each outcome who its actors would be, who would use it and for what purpose? The point is that unless you know what the final outcome form is, one can hardly decide what to collect or what to do with it afterwards.

I think you are going in the right direction. The issue from a research point of view is that you or the nurses are using a tool, so it sounds as if they are not making decisions; the variance comes because they have to look at the patient and answer questions posed by the tool - well, I think that is what is being done. Now if you are looking at their answers to the questions the tool poses, that also is fine, though it sounds as if you want to find out their reasoning, and that also sounds fine. Knowing your actors should help you to decide what form your outcome should take, or vice versa. What I mean here is that you will collect your data as the study progresses, then you have to organise it in some way, and the final step is to generate your outcome, which is what then gets used by your actors. Now, I don't know enough to say what it might be, but let's be simple and say that using your data you create a position paper - that is, a report which discusses and highlights any issues and charts a way forward for, let's say, the consultants to consider and initiate further action (in that case the consultants are your actors). Alternatively, you might create a set of recommendations to improve the tool, or a training plan for nurses so that they can more accurately identify patient problems, etc.
 
Whenever you ask questions there is always the difficulty of feeling sure that the respondents are answering truthfully and not telling you what they think you want to hear, or shading their answers because they fear what might be done with the data. One must also consider that a question might be poorly worded or too difficult, and again we might get unsafe results. From an ethical point of view, one way of being able to rely on the answers is to preserve anonymity, and to do that you need to realise it can be lost in any of the following four ways. Please be aware that if you lose anonymity your results may well be biased.

Lost at the point of collection – for example, if I as your tutor send out a questionnaire at the end of a class on Research Methods asking for your opinion of the unit and ask you to send it back to me, then the way you fill in the questionnaire might be biased because you know I will know who it came from.

Lost by the method of collection – for example, if we collect the data by online means we would give you a password so that a given student cannot submit a questionnaire twice, but that means we can, or have, recorded who you are on the system.

Lost at presentation of results – when the results are presented we have to be careful to remove all identification. For example, suppose I send out a paper questionnaire and on it ask for written comments. It only makes sense if I send the comments to interested parties, and I might very well do that by sending them copies of the questionnaire. If I have not thought about it, I might do that without removing any identification marks or codes.

Lost by classification – suppose I decide to classify my questionnaire by ethnic origin (or any other thing or things); then I might effectively tell whoever looks at the questionnaires who the respondent was. A sketch of stripping identification before sharing results is given below.
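
As a minimal Python sketch (the field names are hypothetical; adapt them to your own questionnaire), stripping identification before results are shared might look like this:

```python
# Strip identifying fields from questionnaire responses before sharing results.
# The field names below are hypothetical illustrations, not a fixed schema.
IDENTIFYING_FIELDS = {"name", "staff_id", "email", "ethnic_origin"}

def anonymise(response: dict) -> dict:
    """Return a copy of a response with identifying fields removed."""
    return {k: v for k, v in response.items() if k not in IDENTIFYING_FIELDS}

responses = [
    {"staff_id": "S123", "email": "a@example.org", "q1": "agree", "comment": "too long"},
]
shareable = [anonymise(r) for r in responses]
print(shareable)  # [{'q1': 'agree', 'comment': 'too long'}]
```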
 
We have talked a lot about how to get a good start on a research idea, but at some point you have to choose a model, a method of actually doing it. There is a vast number of methods; it's almost an industry creating them, and the more complex the name the better! But it does need care, else the process becomes so complicated that one can lose sight of what it is one is doing. Here is a simple guide.

In research the outcome will mainly set out to do one of the following: understand something, explore something, describe something, explain something, improve something, build something or prove something. To do any of this you need a method, a best way of working and this is referred to as a Research Method.

A Research Method is a model or framework in which you set your research design – this is useful because each model will have features that suit what it is you are doing as well as perhaps suit your temperament as a researcher. Here are some very common Research Methods:

Case Studies,
Vignettes,
Action Research,
Experiments,
Quasi-Experiments,
Surveys,
Biographies/History,
Grounded Theory,
Ethnography,
Requirements Gathering​

Choosing a research method depends on many factors, and you can see from this list it is not a simple matter; some factors to consider are:

Context, Time available, Skills available, Practicalities, Access to data, Reason for the study, What kind of outcome you want, Cost, Quantitative/qualitative, Scope and scale, Control, Sensitivity of the data and so on.

The simplest guide to choosing a method is to think about your basic intention – ask am I setting out to: understand, explore, describe, explain, improve, build or prove. Here are some methods that are well suited to particular research intentions:

Case Studies - understanding a situation
Vignettes - explaining a situation or phenomenon
Action Research - improving a situation or process
Experiments - proving a nominal theory of some kind
Surveys - describing a situation
Grounded Theory - exploring a situation or setting​

Example
Suppose my research was about looking at the trustworthiness (problem theme) of computer users in a situation where personal data is being handled, such as in a hospital pharmacy. Here we are trying to explore trustworthiness; the scale is large and the data is very sensitive in terms of accuracy, potential loss or improper disclosure. I decide therefore that I need an exploratory study to try to identify key points and ideas in trustworthiness. This makes me think of Vignettes. Vignettes are like a tiny case study, an outline, a sketch, a cartoon that illustrates just ONE important point at a time, so a collection of these would indicate several important aspects of trustworthiness, and those aspects could then form the basis for a more extensive study or initiate debate about the problem theme.
 
Research Project – Styles of Evaluation
As one might expect there are two major styles: quantitative and qualitative.

Quantitative - finding measurable counts or amounts, or more complex measures based on rules (formulae, if you like).

Qualitative - finding unmeasurable but observable elements; broadly there are four elements:

Context - is the problem setting 'better' because of the use of your outcome?

Representativeness - are the observations of some element relevant, and if so how?

Richness and Depth - to know more about a few very central issues or features.

Ambiguity, Interpretation and Understanding - everyone sees the world differently and interprets what others say to gain understanding.​

Project – Evaluation
Considerable weight is given to the writing of an evaluation of how your project went and the results you obtained. It is expected to be in two parts - an evaluation of practice, and then of your outcome. Be careful here: the outcome is not the same as the results. Results are simply taken to mean the data you collect, whereas the outcome is what you produce after processing the collected data.

Evaluate or Test your Outcome – implies mapping out what must be done to test the outcome when you finally get it, but BEFORE it is used; it follows that project outcome evaluation is in most cases a paper exercise. There are two broad categories of evaluation, and they are sufficient for most project outcomes and practices. In other words, you ask: will your outcome work, will it do what you intended? For example, suppose you generated a new model for aftercare in skin grafts; then you must say here how you tested that model and whether it is likely to work. Two forms are possible:

Formative - implies that the evaluation is carried out before the full use of the outcome, before full implementation. In general this is the normal situation with most student projects.

Summative - implies that evaluation of the outcome is done after implementation and may use beta testing, comparative studies or patient surveys or any number of other techniques.​

Evaluate or Test your Practice – here researchers must say how they will reflect on the various process choices made; the plan is made in advance, but the evaluation of project practice is carried out AFTER the practice itself. Reflecting on practice will often uncover deep meaning about the nature, real purpose and intentions of your own deliberate actions and assumptions, and these may prove uncomfortable, but reflection is a necessary learning device. For example, you will have made a design choice of a research method, so here you have to reflect on that choice and see what went right and what went wrong so that lessons can be learned.

Finally, the evaluation is done BEFORE the project document is finalised, with the conclusions written as a final task. It must be emphasised that writing your evaluation is NOT the same as writing conclusions, although the one may inform the other. Conclusions are about generalising your outcome, and to do that you need to think about your outcome and its evaluation, coupled with your expert knowledge gained from the literature review. In both these aspects thorough preparation from the literature is an essential step; otherwise one simply does not have the requisite knowledge to say anything meaningful.
 
Research Project – the importance of Evaluation
The importance of evaluation is that it allows you to learn lessons from your research practice and from your research outcome, as well as to feel a sense of assurance that what you have produced will be able to bring about change in the way expected or required to deal with the presenting problem.

For example, with regard to a research outcome, one could be very simple in evaluation and ask "will the outcome work when used by a suitable actor?", and that will give you a yes/no/maybe answer. Although that is useful, it is not lastingly helpful, in the sense that you personally have not learned anything by asking that question that might help you in your next project.

Consider what happens when you ask the simple 'will it work?' question, and let us assume you get the answer "no". Well, obviously it's very good to know that, but what is also of immense value is to know why you have concluded your outcome is unlikely to work; that way you are able to learn, and the same thing applies when the answer is a 'yes'. So a good focus in evaluation is always to look for explanation or discussion of the evaluative findings.

General Principles of Outcome Evaluation
In this section an attempt will be made to give an overview of the practice of evaluation. Evaluation is a potentially very difficult activity because it requires a deep sense of openness and honesty about your own work and a real desire to reflect on what you have done and what you have produced. The activity may in some sense be painful if mistakes or poor choices are uncovered, but its value is enormous.

Much of what you do in evaluation is based on the notion of values. It is not easy to define what is meant by 'values', but it is generally understood to mean a belief system that underpins one's judgement - typically, our values are to do with things we want to create, sustain or improve. Such belief systems are usually based on our world-view, which conditions why we think and believe as we do and how we interpret the world. Values are often supported by rules, especially if we have strong values; unfortunately these can often become uncritical dogma with unthinking, unchanging rules. Essentially, we can evaluate worth over many domains, as follows.

Evaluation of Change Potential – the outcome is the agent of change in the hands of the actor, so think about its potential to bring about the changes being sought.

Evaluation of Worth – meaning how far have we gone, by using the outcome, in terms of solving the original presenting problem, and was the whole exercise worth the effort?

Evaluation of Use – every project will generate an outcome and establish an appropriate actor, so we need to assess how useful the outcome is in the hands of a given actor. Here one might also usefully consider the support needs tied to the use of the outcome, which might cover resources as well as training.

Evaluation of Acceptability – every outcome is intended to be used by a named actor; however, one must consider the acceptability to them of what the outcome implies regarding the actions they take and whether they are willing to carry them out. In this sense there may be personal issues to do with skills, job changes, allocation of responsibilities and resources, and one must never forget there may be significant ethical implications as well.

Evaluation of the Environment – every outcome will be used in a given setting, and although the study itself should have considered this as part of the data gathering process, it is nevertheless necessary to examine one's outcome and how well it fits with the current environment, as well as any new one that it might be desirable to create.

Evaluation of Functionality – the outcome itself will in effect contain functional elements, and we need to look at those and see if they are appropriate, consistent and comprehensive, or if they imply the need for more resources or difficulties in implementation.

Evaluation of Scalability – the outcome itself will normally have been created with a particular and usually limited scenario in mind, but here one must consider whether its use can be scaled up and, if so, what benefits and problems are associated with that.

Evaluation of Costs – nothing is cost free, so one has to consider the cost implications of using a given outcome even when it brings obvious financial benefits. Often this type of evaluation is done by a simple review of associated resources and their costs, but it may extend to processes such as net present value calculations.

Evaluation of Reliability – here one must consider how confident we are in the efficacy of the outcome when put into use: in that sense, is it reliable, will it work in the long run and in any scaled-up situational use? In this section one might also consider the notion of safety in use.

Evaluation of Mood – no outcome exists in a vacuum; associated with it will be a kind of working atmosphere and levels of trust in the actors and users, and all these put together give you a sense of mood. If things are to go well we need that mood to be in some sense positive, open, warm, helpful and sustaining.
 
The General Practice of Outcome Evaluation
There are many ways we can evaluate a research outcome but the most common are given below as general guidelines, although in every case they must be supported by evidence.

Compare with existing similar products.
Compare against a standard of some kind (this may be anything not just published ones).
Compare against some defined criteria.
Compare using expert opinion.
Compare by means of simulating outcome use in the real world.
Compare with defined objectives.
Compare using engineering principles: reliability, efficiency, etc.
 
This section just outlines some very general methodological means of focusing on evaluation of the outcome of a research effort. That is, one is trying to decide if the outcome, when used by the situation actor, will actually work. Bear in mind that evaluation means trying to test your outcome before it goes for a real trial. For example, suppose that after a research effort you constructed from your research data a new protocol for the elimination or reduction of cases of rotavirus in babies or young children. It is obvious that one would not want to go live with this until it has been fully evaluated. So here are some possible methods, and of course you may decide to use more than one.

Heuristic evaluation - an evaluator's experiential reactions to use of the outcome; essentially you ask, does use of this outcome "feel right"?

Actor testing - studies conducted by actors, usually in semi-realistic contexts. The aim is to see how the outcome is used and what usability or functionality issues arise.

Interviews & Questionnaires - focus Groups, customer feedback and various methods involving direct user reactions can be used to obtain various qualitative data about users' experiences with the outcome.

Trials and Simulation - using your colleagues (or a similar accessible, controllable group) to try the outcome for a period of time, before it is given to real actors.

Storyboarding – using mood boards, storyboards, cartoons or rich pictures may help you to see how your outcome will be used by actors to bring about change, and what kind of change that is.

Ethnography (from the word ethnic) - the most realistic way of evaluating an outcome is to go into the place where it is intended to be used and watch real actors using it.

Breakdown Analysis - a breakdown is any incident where the actor has cause to focus on the outcome itself rather than its effect or, perhaps more simply, the outcome is not having the desired effects and the actor wonders why.
 
Practice Evaluation
An extremely important and useful part of any research effort is to make a sound plan as to how you will evaluate your practice - what you actually did. The importance stems from the simple fact that if your practice was bad or weak then the results are almost bound to be affected. It follows that what you uncover in practice evaluation may cause you to qualify your results in some way and, in a worst case scenario, even reject the whole study because your methods, on reflection, effectively invalidate your results or make them unreliable.

So here students must say how they will reflect on the various choices they made; a plan for evaluating project practice AFTER it has been carried out. Reflecting on practice will often uncover deep meaning about the nature, real purpose and intentions of your own deliberate (because you made choices) actions and assumptions, and these may prove uncomfortable but are a necessary learning device.

Finally, the evaluation is done BEFORE the project document is finalised by writing one's conclusions as a final task. It must be emphasised that writing your evaluation is NOT the same as writing conclusions, although the one may inform the other. Conclusions are about generalising your outcome, and to do that you need to think about your outcome and its evaluation, coupled with your expert knowledge gained from the literature review. In both these aspects thorough preparation from the literature is an essential step; otherwise one simply does not have the requisite knowledge to be meaningful here.

Evaluating & Testing Practice
Testing what you planned as tasks, methods and approach is hard work because often we don’t like critically reflecting on the way we ourselves work; but nevertheless it must be done honestly even if it proves a bit painful. But recall, in the proposal you are writing just a plan of what you will do. The actual evaluation obviously can only take place after the project outcome has been generated. There are two elements:

Basic Reflection - run through a series of general questions, writing down your thoughts; some will be positive and some negative, but all are designed to show that learning has taken place: how well did I do it; was it successful? Could or should I have done it another way, in parts or as a whole? Did I make any mistakes and were there any surprises? Did I learn anything about research? Did I properly identify the constraints involved, including time management? Did I get the scale or scope wrong?

Focused Reflection - In this section you plan how you will evaluate the main elements of your research effort and again it must be done with commitment and honesty so that you can show you have learned from the experience. Here are several things you might consider.

Outcome Process Evaluation – look at your research design and the defined model or process used to take the primary data and transform it into the outcome and reflect on how well it worked. This is generally in two parts:

Pre-Processing – was this easy, were your processing and data organisational ideas right or wrong, did you have to go back to get more data, did you have all the tools you needed and were they adequate, was the data consistent and matched to the criteria you set and so on.

Outcome Processing – how easy was it to generate your intended outcome using the models you defined? Perhaps the secondary data you needed at this stage was hard to obtain. Did you follow your design here or did it prove impossible, did you have to get further help, did you misjudge what was needed, and so on.

Literature Preparation – were there omissions, looking back were there things you misunderstood, were the sources you used reliable and were they current.

Primary Data Definition – how well did you define the data, did you get good coverage of the problem area, was your definition inaccurate or vague, did you consider confidentiality?

Choices Made – Epistemological Outlook, Research Method, Research Approach and Style, data collection protocol, sample size, population, etc

Data Collection Protocol – how well did this go in practice? Were your selection criteria accurate, did you get the calculated sample size (a worked sketch of one such calculation follows after this list), did the protocol prove hard to use, were there any ethical or confidentiality issues that proved difficult, and so on.

Common Errors – this is similar to what was written above in that many students will just copy these headings into their answers without a shred of understanding, contextualization or selectivity. Others will ignore this section altogether showing they did not look up any of the references.
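As promised under Data Collection Protocol above, here is a minimal sketch of one standard sample size calculation, the number needed to estimate a proportion; the confidence level and margins used are invented for illustration:

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Minimum sample size to estimate a proportion p to within +/- margin,
    at the confidence level implied by z (1.96 is roughly 95%)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size_for_proportion())             # 385, the usual worst case (p = 0.5)
print(sample_size_for_proportion(margin=0.10))  # 97, if a wider margin is acceptable
```

If the sample you actually achieved falls well short of a figure like this, that is exactly the kind of point your focused reflection should record.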
 
There is no neat algorithm for testing/evaluation, but Popper, in "The Logic of Scientific Discovery", suggests four possible lines along which the testing of a theory could be carried out using a largely deductive process. Briefly:

Logical Comparison – this critically examines your outcome and asks is it consistent within itself.

Logical Form – this critically examines the form your outcome takes and asks is it in its basic character empirical, scientific or sadly tautological (the primary data had no bearing on the result).

General Comparison and Repeatability – here one might try to compare your results with others that you have noted in the literature. When you do this your main concern is to see if there has been any advance in knowledge and understanding by your work. This also means that if someone else used your research design and collected and processed their own data (or what you collected) they would get more or less the same outcome as you did. In practice, what you have to do is honestly look at your design details, including all the processing steps, and ask if someone else used them would they, more or less, get the same result. It follows from this that data definitions such as "I will collect data on IT policies" or "I will interview staff to get feedback", or processing steps that just say "I will look at (or study, or analyse, or examine) the data and generate my outcome", are worthless because they only say in the most vague terms what to do, not how to do it.

Application – tests your outcome to see if empirically it actually works. That is, you are trying to determine if your work stands up to use. Now for students this last step may be difficult because of time constraints, and so you may have to be speculative or gain opinion from others on the possible efficacy of your outcome.
It may help you to recall the essence of good scholarship as expressed by Karl Popper: theories become science only when they survive our most brutal attempts to falsify them. We cannot infer the truth of a theory by observation (because we cannot observe every possible occurrence), but we can demolish it by observing facts that are counter to it or by showing that no facts support it. Just as an example, some credulous people say the collapse of the Twin Towers was due to explosives; it is a theory, but it is false because there are simply no facts that support it - if there were, the hundreds of structural engineers who examined the wreckage afterward would have found them.

Therefore most of what we do here is about falsification: working hard, by being sceptical, to show that your own result is indeed false if it is false, so that various facts emerge that either do or do not support what you have discovered.
 
In all the sections below you need to be aware of how your thinking is being driven, as it is all too easy to confuse what you believe to be true with rigour. That is, you must deal with the data you have and not force out of it what you want it to say; sadly, in some cases researchers and students ignore their data altogether and just write as results and conclusions what they think they ought to say. It is easy to become confused over terminology, so you need to be aware of the following ideas and recognise them no matter what exact terms are used.

Raw Data - this is your collected data in its raw state and recorded in tables, interview transcripts, videos or whatever.

Findings - data in its raw state is not much use, so your first activity is to process it into a more useable form such as charts and statistics (see the sketch after this list). These are what you start with for the final part of your research project, but they are NOT the project outcome or conclusions.

Outcome - this is what you create from your processed data. It might be a model, a plan, an explanation, a report, a protocol, a prediction, a verification of some theory or any number of other useful artefacts. But the outcome is specific to your data.

Evaluation - once you have your outcome you must evaluate it, test it before it is used to see if it has any value. For example, suppose you produce from your data a new hospital service model then it is obvious you need to test it before it is actually implemented and used. In this case you might test it by creating a scenario and simulate its use, you might just run a seminar to see what others feel about it etc. In general your outcome at this stage is on paper so project evaluation is a paper exercise as usually you do not have time to put it into use even as a pilot and test that way.​
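To make the raw data/findings distinction concrete, here is a minimal sketch in Python with invented survey responses; the processing step turns raw answers into a frequency table, which is a finding, not yet an outcome or a conclusion:

```python
from collections import Counter

# Raw data: invented answers to "How useful is the new ward checklist?"
raw_responses = ["very useful", "useful", "very useful", "not useful",
                 "useful", "very useful", "useful", "not useful"]

# Findings: the same data processed into a more usable form (a frequency table).
findings = Counter(raw_responses)
for answer, count in findings.most_common():
    print(f"{answer}: {count} ({100 * count / len(raw_responses):.0f}%)")

# The outcome (say, a revised checklist protocol) would still have to be
# built FROM these findings; the counts themselves are not the outcome.
```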

Conclusions
Your results and outcome are specific to your chosen topic, so in conclusions you do not just want to repeat what you have already written under results and outcome. Looking at your results is about suggesting specific implications regarding the solution to the problem you set out to solve. In conclusions you are then going to try to generalise those findings. To do this you might look at some of the ideas presented below.

New meanings, originality, implications, new or modified principles, limitations, new or modified theorisations, indications of best practice, unanswered or new questions, lessons learned, indications of a need for further work, implication for law or standards, warnings or cautions, advice, caveats, values, ethics, factors or features including cultural ones, usage and user psychology and other things that might occur to you.

The heart of the problem is how we can logically go from specific instances (your data) to general conclusions. How can we possibly know that what we have observed in our necessarily limited research on given objects and events will be enough to enable us to figure out, derive or predict their more general properties? That is, suppose you use your primary data to build a model of human/computer hospital technology relationships. Well, that is fine, but that model was built using a tiny set from the possible data population, so how logically can you get from there to making predictions about its use in the wider world of your own hospital and elsewhere?

For example, you might have set your project in the UAE on the topic of modern hospital communication technology and generated a position paper and evaluated its content. However, in conclusions we must now ask "do any of the findings have general implications" for other hospitals, and these might cover some or all of the conclusion areas listed above.

One might say here that some argue that the purpose of research is not to find facts but to generate new and better questions, so one never comes to the end but is always moving forward. Mostly the above focused on your findings/outcome, but it is also worth considering the methods you used to get them. Be careful here because this must NOT end up the same as evaluation: evaluation is project specific, whereas here it is about making generalisations, and that is often not easy to do.

To be able to do any of the above requires that you have data but, almost more importantly, that you have already generated a lucid, comprehensive and specific literature review. You MUST understand that no matter how good your data is, you will not be able to make sense of it unless you have the necessary knowledge and skills. For example, suppose I sent you a perfectly sound set of data on transpersonal psychology or crono-creatures or angiosperms; it is obvious you cannot possibly make sense of such data unless you know a lot about that subject area.

Conclusion Strategy
It is not easy to be precise as to how to write conclusions that are generalisations of your observations and findings. However, one way is to start by using your detailed knowledge of the subject area, based on your literature review headings, to write topic area commentaries in an a priori manner; commonly this is focused on benefits and features. For example, you can look at international standards, government or organisational policies and so on, and all these will give you comment area ideas. Once you have a commentary you can take each paragraph in turn and re-write it in the light of your findings. Suppose I carried out a survey on the use of Skype as a modern day inter-hospital communication tool; I might start off with the topic area commentary paragraph below (one of many of course).

Topic area a priori comment (taken from the Literature Review)
The only real difference is that Skype can easily handle video calls and conferencing simultaneously. If these two services are the only differences, then subscribing to Skype might be impractical. If a task can be done by one tool, there is no need to use another gadget with the same purpose. Although, video streaming makes the conversation more intimate, spending the extra amount of time because of video may be non-viable. This does not mean that Skype is a failure among users as many respondents showed high degrees of interest and enthusiasm.

The above commentary is roughly written but that does not matter, as it is just a means of generating comment to merge with some survey findings to become:

Final Project Document's Conclusion Version
In the survey findings one notices some, let's call it, confusion over having a wide range of features of widely different facets all in one package: voice, video, chat, conferencing, file transfer, multiple calls and so on. Now, one might argue that many technologies such as simple email offer similar features, so an unimaginative user might ask "so what do I gain; if a task can be done by one tool, is there any need to concentrate them all into one box as it were?" So on a larger scale, a hospital administrator might just feel this is another "gadget" to be managed and therefore not see its true potential. For example, video streaming in Skype makes the conversation more intimate and immediate, but does it do much more than that? That is the real question.

This does not mean that Skype is a failure among individual users, indeed, the fact that many respondents showed high degrees of interest and enthusiasm implies that Skype generates positive perceptions from many users and perhaps that is what is really needed; a sense of optimism and imagination to see how this new technology can be used for positive business and social purposes.​
 
Just a change in direction now; I will make several posts to do with literature reviewing and publications.

Types of Literature Sources
The available literature is classified, broadly speaking, into the two kinds described below. Ideally, in high level scholarly work that might appear in journals, one wants to use only, or mostly, primary sources.

Primary Sources – the first published documents; usually this will mean journals, research papers, government or company reports, that kind of thing. It is therefore not a good idea to focus too much on books in this category; tutors will normally accept them as authoritative, but if you are on an advanced course always seek out the journals as a first port of call. The importance, of course, is that we read in context: we can see what the authors thought, what their assumptions were and how they arrived at their position. One can be really pedantic and say the primary source is the author's manuscript or autograph, but we are satisfied with published sources. It will, however, often be difficult to establish that something is indeed a primary source.

Secondary Sources – in almost every document you see, there will be elements attributed to other authors; these are then secondary sources, and it follows that most books fall into this category. The problem comes when you want to use something that you have discovered but that discovery is itself a quotation - that is, you are getting a context from the secondary source and that may be quite different from the one in the primary source. There are ways of signalling this but in general it is to be discouraged in formal research work or study beyond a first degree, because you want certainty about the source as well as its correct context.

Be careful not to confuse the above definition with those for primary and secondary data. When we talk of primary sources we are obviously referring to something that is published and exists whereas with primary data it will not exist as a collection until a researcher defines, locates and collects it.

Using Books
You should not interpret what is said here as a call to ignore books. No, books are your starting point to build up a firm foundation of knowledge, especially if you are new to the area. Once you have the basic knowledge you can go on to journals and other forms of primary sources to get the very latest research and thinking in your subject area.
 
A literature review is a structured account of a topic area that lays the foundation for a research effort. It must be comprehensive, current and lucid. Most importantly, it must be critical, meaning that YOU must add comment or explanation to what you have found - in short, a review is not a recitation of what has been found but an exposition of it.

It follows that from a structural point of view you need a themed list of sub-topics using headings, subheadings, paragraphs, bullets, tables, diagrams and so on, in order to get a coherent and lucid discourse on your chosen subject area. This is not a trivial matter and you must expect to go over it many, many times before it is completed.

A Simple Literature Review Checklist
In summary, the review is about your topic area and about you becoming sufficiently expert in it to deal with the presenting problem that you have uncovered. The intention is for you to offer a discourse that is Focused, Relevant, Authored, Measured, Evaluatory and expressed as a Dialogue. (Notice the acronym FRAMED)

Focused – this means that your whole effort is focused on the topic area and the particular aspect of it that you are pursuing. So do not be tempted to add in other things just because they might be useful, interesting or novel, or because you just have nothing else to say.

Relevant – any topic area aspect will itself represent a large body of knowledge and so you must continually ask if a particular element in the knowledge domain is relevant to your particular study.

Authored - any literature review is to be written by its author. This sounds obvious but it is all too easy to fill up a review with cited quotations, paraphrases and summaries so that the ‘hand’ of the review author is not evident anywhere in the work. When this happens it is not an evaluative review at all but simple plagiarism. The author’s ‘hand’ must guide and direct the review in an evaluatory fashion so that the review is a message from the review author and not a recitation of what has been found elsewhere. Typically this is done by using your own skills and knowledge to introduce, comment, add to, modify and extrapolate from various primary sources available.

Measured – this is a matter of selecting and using the focused and relevant materials that you have found. Unfortunately, it is all too easy to pack in information in excruciating detail and so end up with a laboured entry that treats your readers as if they were completely ignorant of the subject area. So you need to ask honestly "is the entry a measured response to the readers' information needs?"

Evaluatory – authors sift through the primary sources looking for materials to use. The essence of this sifting is an evaluatory outlook based on an awareness of your problem theme, your topic area and your own ideas. Care is needed because this process is not about searching for materials that you agree with or like in some way. Instead it is a contextualised response (based on what you already know) and that may mean you find materials that are new to you, materials that make you change your own knowledge base and even materials that completely replace what you previously thought of as solid.

Dialogue – a review is a form of argument. Good arguments are based on a strong theme and try to explain to, and convince your readers about something. So it is best if you think of it as a kind of dialogue in which you challenge them about your review theme and content.
 
References are to sources that you use in your written work whereas a bibliography is a list of sources you have identified as useful but not necessarily used. Your University/College will look carefully at any references to see if you are prepared for study in your chosen project/dissertation topic or submission to a Journal or conference. For each source you must consider its:

Usage - The basic usage strategy is:

Find – Relevant texts using a library index, the internet, online book stores and so on.

Evaluate – Once you find a possible source you must evaluate it for content, currency and relevance.

Contextualise – that is fit this new source into your personal knowledge base but at the same time make absolutely sure you have understood what the author said and the CONTEXT in which it is set.​

Cite – If you use a source it must be listed in your reference section and cited correctly in the text. But remember, you must not use a secondary reference (a quotation of a quotation) unless you are desperate. In scholarly work you are always in danger of censure if you use a quotation of a quotation, because you are simply admitting to the world that you have not seen the correct context and so may well have misunderstood the author. It is also true that many will simply regard you as lazy for not getting to the original.

Discuss – you may include something from a source in your work as a copy (quote), paraphrase or summary but in all cases you must introduce it, comment on it and cite its source.

Currency – look at the publication date and be aware that in many scientific areas information is soon dated.

Accuracy – Is the information correct? If you cannot be sure then you must not use it.

Relevance – Make sure that your sources are relevant to your project topic.

Completeness – Make sure you are looking at the final version not some draft or abstract.

Uniqueness – is the source a primary one and recall anyone can publish just about anything, especially on the Internet. But also ask if there are other sources for the material you have uncovered by looking at the list of references included.

Coverage and Range – Use your list of sub-topics to ensure that you cover all the areas required with a range of authors so that you are fully prepared. But make sure that you are not including multiple texts with essentially the same content.

Authority and Authenticity – ask "is the text authoritative?" by considering the author, publisher, writing style and currency. It is also possible to use citation indexes to see how often the source has been used. In this respect general online sources such as Wikipedia are suspect and should only be used as a starting point, not as a main source, and should NEVER be cited other than for items that are either common knowledge or obvious. There are two elements we need to be aware of:

Author – who is saying what you are interested in? This might seem simple, but often with, say, internet sources we have no idea who the author is supposed to be; they may assume personas, lie or make false claims, so one must consider the motives of those who publish, particularly if it is on the Internet.

Content – what is being said and one needs to be very careful that you can distinguish between:​

Opinion – such material can be used and discussed freely.
Assumption – be careful, but as long as the assumptions are stated, one knows the limits of the knowledge.
Unstated Assumption – pay careful attention to this element as it is often hard to detect.
Tendentious – when the author wants to convince you of something and will use any means to do it.
Context – be aware of the context of what you find; is it a University site, is it a manufacturer etc.
Validation – authors do not always have their materials checked by an authoritative third party​

Fact – here one needs to take much more care that you have the original source. Remember, facts can be quantitative data, theories and explanations, but the whole notion of a fact is troublesome when used to support arguments.

Trust - in research trust nothing until you have good cause to do so. This is the opposite of what we do in our daily lives in that we tend to trust until we have reason not to.

Validity – this means that we ask whether this is a valid source in the sense that it was constructed in a reliable manner. Any lack of information on proof readers, editors and publishers means that mistakes are more prevalent than in print, and there is therefore increased scope for innocent error and for outright deception.
 
Just for a while, to interrupt my posts about literature, I'd like to recommend "Bad Pharma" by Ben Goldacre, published by Fourth Estate, ISBN 978 000 7350742. The book describes how people have suffered and died because the medical profession has allowed industries' interests to trump those of patients. Readers may like to look at my posts 4 and 5 in this thread to see lists and explanations of why research goes wrong. But just to show this is not biased, you might like to look at a recent paper in New Scientist No 2882, 15 September 2012, called "Is medical science built on shaky foundations", which I briefly summarize here.

More than half of biomedical findings cannot be reproduced – we urgently need a way to ensure that discoveries are properly checked. REPRODUCIBILITY is the cornerstone of science. What we hold as definitive scientific fact has been tested over and over again. Even when a fact has been tested in this way, it may still be superseded by new knowledge. One goal of scientific publication is to share results in enough detail to allow other research teams to reproduce them and build on them. However, many recent reports have raised the alarm that a shocking amount of the published literature in fields ranging from cancer biology to psychology is not reproducible.

Pharmaceuticals company Bayer, for example, recently revealed that it fails to replicate about two-thirds of published studies identifying possible drug targets (Nature Reviews Drug Discovery, vol 10, p 712). Bayer's rival Amgen reported an even higher rate of failure - over the past decade its oncology and haematology researchers could not replicate 47 of 53 highly promising results they examined (Nature, vol 483, p 531). Because drug companies scour the scientific literature for promising leads, this is a good way to estimate how much biomedical research cannot be replicated. The answer: the majority.

The reasons for this are myriad. The natural world is complex, and experimental methods do not always capture all possible variables. Funding is limited and the need to publish quickly is increasing. There are human factors, too. The pressure to cut corners, to see what one wants and believes to be true, to extract a positive outcome from months or years of hard work, and the impossibility of being an expert in all the experimental techniques required in a high-impact paper are all contributing factors.

Attempts to reproduce others' published findings can be expensive and frustrating. Drug companies have spent vast amounts of time and money trying and failing to reproduce potential drug targets reported in the scientific literature - resources that should have contributed towards curing diseases. Worse still, failed replications also quite often go unpublished, thereby leading others to repeat the same failed efforts. In the modern fast-paced world, the normal self-correcting process of science is too slow and too inefficient to continue unaided.

Thinking about the reproducibility problem, an organisation called Science Exchange could help by providing investigators with the means and incentives to obtain independent validation of their results. Here's how it works. Scientists submit studies that they would like to see replicated. An independent scientific advisory board - all members of which are leaders in their fields as well as advocates on the reproducibility problem - selects studies for replication. Service providers are then selected at random to conduct the experiments, and the results are returned to the original investigators, who can then publish them in a special issue of the open-access journal PLoS ONE. A "certificate of reproducibility" is issued for studies that are successfully replicated.

The goal is to provide a much-needed imprimatur of robustness that will ultimately increase the efficiency of research and development and bring us one step closer to perfecting the scientific method, for the benefit of all.

Elizabeth Iorns is co-founder and CEO of Science Exchange, based in Palo Alto, California. For more information, visit reproducibilityinitiative.org
 
I thought we might look at Journal paper writing for a few posts. If one is going to be serious about research then sooner or later you will want to publish. A good place to begin is by going to conferences and perhaps publishing a paper as conference proceedings. But if you really are to get recognition then you must try for the Journal market where in general standards are much, much higher.

Now there are many thousands of journals and they publish huge numbers of papers, so you may have to wait quite some time to get a work published; that is why it's best to be in teams and share the glory, at least when one starts out on this journey. If you do not know which journals are relevant for your topic then ask your teachers and professors - they SHOULD know.

I've looked over the Internet and there are numerous guides on how to write papers and I have selected one which I think is quite good, easy to read and understand. I will also add notes on various sections to help, one hopes, you fully understand what is involved.

http://abacus.bates.edu/~ganderso/biology/resources/writing/HTWgeneral.html

Some General Points
Essential to all research, and based on the scientific process, is the reporting of results (meaning the data itself) and the outcome or outcomes you draw from that data – that is, you have to try to say what it all means. Doing this is quite hard, and even when you have written it all up most journals will only publish after peer review by a small group of relevant experts who recommend the publication, though usually with some revision.

All journals will have a writing guide, which you will have to follow EXACTLY. But knowing the style required will not make you into a writer – that is something that comes with dedication and practice, not forgetting that a good command of English (or other language) and its grammar is a prerequisite. A sound way of improving your own writing skill and style is reading and critiquing other people's work and, most importantly, letting others do the same for you. The guide cited above suggests there are three aspects to any good writing:

Precision - meaning the quality of being accurate and consistent. It is obvious that there is nothing worse (except perhaps fraud) than making a serious mistake, because it will almost certainly be picked up by the reviewers, and that tends to set alarm bells ringing in journal editors' minds and will either cause outright rejection or certainly delay publication.

Clarity – it is not easy to explain what this means, but firstly we can say the work must be clear to those who are expert in the area – if it is not clear to them then you are in trouble. You must try to make your work free from obscurity and hopefully therefore easy to understand. This is why it is ESSENTIAL to get others to read it, and these others must be people who are not afraid to tell you the truth. In general, you want critical comments; comments that ONLY say it's fine, very good and so on are practically useless.

Economy - this simply means efficient use of resources. That is, you will usually have a maximum word or page count and these cannot be exceeded. Language itself is the product of economy in words and languages are shaped to a very large extent by this quality. For you it means you have a certain subject area vocabulary and your own natural one, created and honed by your own lifetime reading, talking and listening habits.
 
This is the second post in the series on writing for Journals (see Post 43)

Organization and Order is Vital
It's obvious that you have to sort out your materials and compress them, because there will always be a word count and you want to make reading easy. Here are some ideas, though these are general and may not entirely suit your way of working. However, don't assume that YOUR way is the best and only way; take some time and look at other ways and means. You would be surprised at how many people, notably students, think there is in effect ONLY one way of working, theirs, and never look for newer and possibly better methods.

Outline - some people like to spend some time getting a good outline, setting a kind of pattern for the paper and that has some merits but because it tends to obscure the details it might not always work to your advantage. In this thread there are some outlines of research processes and you may find these helpful in forming a good outline for a publication.

List of Points – some like to use a list of points, often drawn up as a spider diagram, which tends to emphasize the links between different parts of the research. There are many software packages that help with this and it is as well to start with them early in the research itself.

Drafting – some like to draft out the paper more or less in full right from the start. Again, this has merits but may cause you to lose structure or often end up with something far too long to be useful.

Authors – if there are multiple authors then you are into negotiations as to how all this is going to be done. You must have a lead author or coordinator, otherwise the whole thing will crumble into a mess. Choosing or electing the lead author can be fractious and there is no easy and automatically best way; you can end up with arguments over whose name goes first and so on. For myself, these sorts of things need to be discussed right at the beginning, and I think the responsibility lies heavily on the person who had the basic idea or who obtained the funding.

Audience - usually you will be writing for your peers and assume they have at least the same knowledge, background and expertise base as you. Knowing the audience helps you decide what information to include: you may be writing for a narrow, highly technical, disciplinary journal or one that goes out to a broad range of disciplines and of course everything in between. You should also be aware that large companies scrutinize the research literature looking for useful ideas or experimental methods so it is as well to keep them in mind (think of further funding).

Prose – writing should conform to the conventions of standard written English as well as being mindful of the relevant scientific terminology, because your ideas will have little impact, no matter how good the research, if they are not communicated well. Be certain you choose your words correctly and wisely. It is said, with a good deal of truth, that when people have difficulty translating their ideas into words, they generally do not know the material as well as they think.

Be clear and concise – write briefly, say what you mean clearly and concisely, and avoid embellishment with unnecessary words or phrases. Use of the active voice alone shortens sentence length considerably - use active verbs whenever possible; writing that overly uses passive constructions (is, was, has, have, had) is deadly to read and almost always results in more words than necessary to say the same thing. So avoid:

Trying to impress people by using words most people have never heard of.
Using colloquial speech, slang, or "childish" words or phrases.
Using contractions: for example, "don't" must be "do not" and "isn't" must be "is not".
Being mechanical - break any of these rules if it is warranted.
Using the passive voice: prefer the active "the mouse consumed oxygen at a higher rate..." to the passive "oxygen was consumed by the mouse at a higher rate..."

Words - scientific terminology carries specific meaning, which you must use appropriately and consistently. The whole point of terminology (or abbreviations) is to say a lot with a few words, and this is the basis of writing economy coupled with a clear idea about audience. It is wise to obtain a list of common technical abbreviations as well as the usual terms used in writing, most often Latin words or phrases plus the conventions appropriate to the method of citation to be used.

Citations – there are several methods of citation and they will tell you how to cite anything from a quotation from a book to a TV commercial. Common methods are APA, Chicago, Harvard and Vancouver, though there are several others – in general they are not hard to learn and the details are easily found on-line. It cannot be overemphasized how high quality referencing is absolutely essential, and such referencing must invariably be to the primary sources – if you can't find the primary source then I advise you NOT to use it, because without the source we have no context.

Tense - research papers are about work that has been completed; therefore use the past tense throughout your paper.

First vs. Third Person - Some disciplines and their journals have moved away from strict adherence to the third person construction, permitting limited use of the first person in published papers. However, it is likely to be best to limit your use of first person construction (I or we) particularly in its use in the results section.

Plagiarism - use of others' words, ideas, images, etc. without citation is not to be tolerated and can be avoided by adequately referencing any and all information you use from other sources. There are several ways to define plagiarism, but I think it is best to define it in terms of what you can count. A very common standard to use is: "wherever 6 or more consecutive words are extracted from a source they must be acknowledged, and 10 consecutive words that are unacknowledged will be regarded as proof of plagiarism" (a toy sketch of this counting rule follows below). Finally, be aware that being charged with plagiarism at this level is likely to end your publication career.
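To show what counting consecutive words actually involves, here is a toy sketch in Python; real plagiarism detectors are far more sophisticated, and the two text snippets are invented, so treat this only as an illustration of the rule above:

```python
def longest_shared_run(text_a, text_b):
    """Length of the longest run of consecutive words common to both texts."""
    a, b = text_a.lower().split(), text_b.lower().split()
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

source = "the checklist reduced central line infections across all intensive care units"
submission = "we found the checklist reduced central line infections across all sites studied"

run = longest_shared_run(submission, source)  # 8 shared consecutive words here
if run >= 10:
    print("regarded as proof of plagiarism under the rule above")
elif run >= 6:
    print("must be acknowledged with a citation")
```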
 
This is the third post in the journal writing series (see post 43 for the start).

Get Organized
Before starting to write your research paper, use whichever strategy works for you to begin to order and to organize your research points and ideas into sections.

Primary Research Literature
Before you actually begin your research you need to create an in-depth, balanced review of the primary research literature relevant to your study. If you don't do this you will simply not know enough to decide what data you want and when the research data has been collected you will not know enough to make any sense of it. The literature will form the primary basis of your introduction, discussion and conclusions.

Be warned, it is not uncommon for research to fail because the wrong data was collected or more commonly, the researchers having no idea what to do with the data once collected. Finally, when you come to generate your conclusions then a detailed and thorough knowledge of the literature is absolutely essential.

For example, suppose the purpose of your study is to compare transvaginal sonographic endometrial assessment with histology obtained by endometrial curettage in postmenopausal patients, and to determine a cut-off point for endometrial thickness to reduce unnecessary diagnostic curettage for postmenopausal bleeding. Well surely, it is obvious that to attempt this without a complete review of the literature is totally foolhardy. The review will help you learn what is known about the topic you are investigating and help you avoid unnecessarily repeating work done by others.

Basic Sections
The basic sections of almost any paper are as follows.

The Introduction – this outlines the basic research theme and your mode of thinking and leads the reader into your hypothesis/research question.

Design of the Experiment – this is a potentially very difficult section because any mistakes here will have grave consequences for the research and may even totally invalidate it. It involves three major elements: deciding what data you want, deciding how it is to be collected, recorded and presented, and finally, how the data is to be processed into your research outcome.

Results Section – commonly, the results section will have text as well as tables and diagrams. Obviously, the data itself is the focus but the text is what guides and informs the reader how to look at the data. Do not be confused here, the data (your results) is NOT the same as your outcome. The outcome is what you generate by processing the results. For example, in the experiment I outline above, the outcome is perhaps a method of determining a cut-off point for endometrial thickness.

Evaluation – this section is often done very badly or ignored altogether, particularly in student work. But if you are to inform your readers you must at least evaluate how the data was collected and, on reflection, whether there were any issues which need addressing to qualify your results.

Abstract and Title - the abstract is almost always the last section written because it is a concise summary of the entire paper and is usually expected to be less than 300 words or so. In such an abstract one tries to include a clear statement of your aims, methods, key findings and outcome – obviously, a very difficult task. What I recommend is that you use your research question, with its seven features, to form the framework of this section (see posts 24 and 25).

Prepare the Final Draft - carefully proofread, or get others to proofread, what you have written. You must do all you can to ensure there are no errors in grammar, findings, citations and so on. Double-check everything; believe me, there is nothing more galling than your peer reviewers finding obvious errors, leaving you with a delay in publication.
 
This is my 4th post in the Journal writing series (See post 43)

Peer Review
It's always a bit hard to be reviewed, especially written work where you have spent days and weeks getting it all done and someone spots an error immediately that you missed. It depends who the reviewer is, but most find it hard to give negative comments or feedback. It's therefore a good idea to do some peer reviewing yourself and see how you fare in that kind of work. The key perhaps is to:

Give positive commentary where a writer has done well.
Turn negative feedback into really useful feedback.

Start by reading the paper carefully and noting its strengths, so that the author will not lose these in any necessary revision. Remember you are looking at the paper as a whole, including its structure, grammar and writing style. If you find a problem (and make sure you are right!) then write clear and helpful comments in the margins or as extra notes.

Try to avoid saying 'this is unclear', 'this is hopeless', 'this is disorganized' without saying why, or at least pointing to where it might be improved. Ask yourself, 'if I were reading these comments would I know what to do next?' Now it can be tedious to write comments, but just remember that the expectation is that you are conscientious about doing this well – let's face it, you would not be at all pleased if you got useless comments, would you?

The basic strategy is to go beyond your initial reaction after reading a section and ask why you are reacting negatively, what is wrong or what is making you uneasy. Try not to go overboard, because then you will start finding fault with everything, including the font! Typical things that may occur to you are:

Contextual issues - several topics jumbled together in one paragraph.

Logical Issues – a paragraph has a single topic treated, but it is not presented in a logically sequential manner. Logical sequence is important because the reader may be stumped by something said by the writer because as yet he does not have a crucial bit of information that comes later.

Footnotes – occasionally you will see huge footnotes and in such cases you need to encourage the writer to put some of it in the text itself otherwise your are forcing the reader to jump back and forth from the footnote to text and that is unsatisfactory. Occasionally, you may find writers who want to back up everything and this can led to a provsion of essentially unnecessary food notes.

Assumptions – occasionally writers think everyone reading the paper knows nothing about the subject in hand and so will go into excruciating details and bore everyone to death.​

Some examples of poor and useful comments:

Poor - "This section needs a lot more work."
Useful - "This is a crucial part of your description of your experimental method but it is not entirely clear what you did. For example, you speak of '5 weighted treatment groups of 25' but we don't know how you chose the groups and we have only minimal information on their composition."

Poor - "Disorganized!!!!!"
Useful - "This section discusses things that belong in your literature review (typical fecal transplant methods) and seems then to confuse what you are saying about your experimental methods.

Poor - "How are these references relevant?"
Useful - "The background and references given in section 5.2.1 don't seem directly relevant to the research question. The point is that cataract surgery complications are few and I think you are in danger of adding things in to ‘fill’ this gap especially as you are looking at Posterior Capsule Opacity, one of the most common cataract surgery complications.​
 
This is the 5th post on Journal writing.

There is an interesting and very relevant article on writing for journals in Scientific American for December 2012, Volume 307, No. 6, pages 43-49. The title of the article is "Is Drug Research Trustworthy?" I strongly advise all researchers or would-be researchers to read it with care. I will briefly summarise some of its main points.

The key point is that the pharmaceutical industry funnels money to prominent scientists who are doing research that affects its products - and nobody can stop it. For example, if findings suggest that the estrogen drug called Premarin fights osteoporosis, that is tantamount to encouraging millions of women to use the drug, making the lead scientist a very important person in the eyes of the relevant drug company.

Once this kind of thing happens it is not uncommon for drug companies to draft research articles and for researchers to take thousands of dollars from the pharmaceutical interests that stood to gain from the research. It is not so much that one scientist takes this route but that it is becoming all too typical, and no one is providing the checks and balances necessary to avoid conflicts of interest.

Obviously, not all relationships are bad and indeed without help from the pharmaceutical industry, medical researchers would not be able to turn their ideas into new drugs. However, some of these liaisons co-opt scientists into helping sell pharmaceuticals rather than generating new knowledge.

One particular area relevant to this series of postings is that of ghostwriting, meaning the pharmaceutical manufacturer drafts an article and then pays a scientist (the "guest author") an honorarium to put his or her name to it and submit it to a peer-reviewed journal. As the New England Journal of Medicine puts it, "To buy a distinguished, senior academic researcher, the kind of person who speaks at meetings, who writes text books, who writes journal articles - that's worth 100,000 sales people." Peer-reviewed journals are littered with studies showing how drug industry money is subtly undermining scientific objectivity.

The answer to conflict-of-interest problems is transparency with researchers openly declaring to their research subjects, their colleagues and anyone else affected by their work any entanglements that might compromise their objectivity.

So a final word on ghostwriting as a way of influencing scientific discourse: once a drug maker, or indeed anyone, can steer the way YOUR research is written, it is liable to control, to a large degree, how a scientific result is understood and used by clinicians.
 
This is post number 6 on Journal Writing, see Post 43 for a start and sources.

Writing the Abstract
Students and researchers often get into difficulty writing adequate abstracts, so this is a short note on a way to construct them.

Function – attempts to summarize, in about 300 words maximum, the key aspects of the entire paper. In particular, one tries to cover the question (usually just one question; indeed, one might be worried about a study that claimed to answer several major questions), and so broadly one tries to say something about the methods used, the major findings and a brief summary of your interpretations and conclusions.

The Abstract is supposed to help readers decide whether they want to read the rest of the paper, and often it may be the only part available via electronic literature searches or in published abstracts. Therefore, there has to be enough information to lead someone to go further and see the whole paper – so you have to spend some time over doing this! It's often very hard to be honest with yourself, so write the abstract and then wait a few days and read it through again as an imagined researcher doing a study similar to the one you are reporting. Then ask, if this were all you could see, would you be happy with the information presented?

Style - the Abstract is ONLY text so use concise, but complete, sentences, and get to the point quickly, using the past tense. It is inadvisable in an abstract to use: too much background information, references to other literature, ellipsis (i.e., ending with ...), abbreviations or terms that may be confusing, or any sort of illustration, figure, or table, or references to them.

Strategy - the Abstract is written last since it summarizes the paper, though it's a good idea to keep it in mind and add to it as you go along. I would recommend that you start by taking your Research Question itself and building your abstract from that. Here is a reminder - Research Questions have 7 features: Interrogative (I), Outcome (O), Actor (A), Problem (P), Target (T), Spotlight (S) and Activity (a), although Activity is often implied rather than stated. It is vital you understand that the order in which the features appear in the Research Question will depend on the interrogative used. If you are not careful here and just stick the 7 main features anywhere, choosing any interrogative, you will end up with a question that makes no sense. The features are as follows (there are several posts on research questions in this thread if you want more information; a small sketch of keeping the features together follows after this list):

Interrogative – what is your key interrogative word? Note that some interrogatives need two words if a proper question is to be formed. For example, "how" on its own will not normally make a question, but "how can" clearly does.

Outcome – ask what sort of answer you expect and what form it might take. Answers might be yes/no, an explanation, an exploration or a description, which may be expressed at the end of the project as a report, a model, a list and so on.

Actor – the person or persons who take the Outcome and use it to get the target effects.

Problem – focus on a single significant problem and be as concise as you can.

Target – what effects will be observable and measurable in the real world if you can resolve the problem? Effects are things such as efficiency gains, better communication, increased accuracy and so on.

Spotlight – put the spotlight on where the primary data or information needed comes from.

Activity – this is the activity used to record the data in a stated form.
Check your work - once you have the finished abstract, check to make sure that the information in the abstract completely agrees with what is written in the paper – be warned, don't make this mistake and end up getting the paper rejected on such a silly point.
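As flagged above, here is a small sketch of keeping the seven features together while you draft the abstract (Python; the example question and all field contents are entirely invented, and the field names simply mirror the list above):

```python
from dataclasses import dataclass

@dataclass
class ResearchQuestion:
    interrogative: str  # e.g. "how can"
    outcome: str        # the form the answer will take
    actor: str          # who will use the outcome
    problem: str        # the single presenting problem
    target: str         # observable, measurable effects sought
    spotlight: str      # where the primary data comes from
    activity: str       # how the data is recorded (often implied)

rq = ResearchQuestion(
    interrogative="how can",
    outcome="a handover protocol",
    actor="ward nurses",
    problem="delayed handover of patient observations",
    target="fewer handover errors per shift",
    spotlight="handover records in one surgical ward",
    activity="structured observation sheets",
)
print(f"{rq.interrogative} {rq.outcome} help {rq.actor} reduce {rq.problem}?")
```

Each feature then maps naturally onto a sentence or two of the abstract: the question itself, the methods (spotlight and activity), the findings (target) and the outcome.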
 