For discussion's sake, let's assume the patient is not malingering and has not been misdiagnosed (and their therapeutic response at a lower-than-typical dosage can't be adequately explained by spontaneous remission or the placebo effect alone). What does that mean, then? Are we talking about a simple case of everyone being an individual, factoring in things like age, sex, height, weight, etc., or a metabolic process of some sort? Is it possible to test for medication efficacy at particular dosages (higher, lower, somewhere in the middle) before a patient is even given a certain medication? For example, does there exist a test, or is one being worked towards, whereby it could be pinpointed that 'Jo Bloggs here has X condition and, based on his pre-medication tests, we can determine he needs a dosage of not less than Y mg of ABC medication'?
Going to try and answer this without taking you through 2 years of medical school. The short answer is: we do, but it's not how you've envisioned it. The medications you take that are FDA approved have (theoretically) already been analyzed across age, sex, weight, race, and underlying-disease differences for a given dose, and been tested at a wide range of doses to arrive at the treatment doses your doctor prescribes. Beyond this, we DO account for the other drugs a patient is taking and their overall medical condition when dosing any medication. Some drugs increase the rate at which certain other drugs are metabolised; others slow it down. We know this. A patient with chronic kidney disease is unable to excrete renally-cleared drugs as well as the (generally healthy) population the drug was approved in, so we have to dose differently (decrease the dose). But that same CKD patient can take normal doses of a hepatically metabolised drug. We don't have to do specialized tests for this; we use some basic tools - the patient's history and physical and very basic labs (e.g., BMP/CMP) - plus our knowledge of physiology, pharmacology, and experience from medical school and beyond.
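To make the renal-dosing point concrete, here's a minimal sketch of the kind of back-of-the-envelope estimate clinicians actually use: the Cockcroft-Gault formula for creatinine clearance, computed from exactly the basics mentioned above (age, weight, sex, and a serum creatinine off a BMP/CMP). The function name is mine, and the interpretation in the comment is illustrative only, not prescribing guidance.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimate creatinine clearance (mL/min) via the Cockcroft-Gault formula."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# A 70-year-old, 70 kg woman with a serum creatinine of 2.0 mg/dL:
crcl = cockcroft_gault(70, 70, 2.0, female=True)
print(round(crcl, 1))  # ~28.9 mL/min - well below normal, so a
                       # renally-cleared drug would get a reduced dose
```

This is why no exotic testing is needed for the CKD scenario: the inputs are routine vitals and one lab value.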
More: There's usually no reason to re-invent the wheel, when millions of dollars have already been spent on drug development, by chasing after a gene or two when we don't fully understand how the concert of gene products influences drug metabolism and action - in part because we don't know all the players, and in part because a violin alone in a room sounds different from a violin in a full orchestra. Genetic tests are also expensive, so we don't usually do genetic testing for medication dosing/administration. One of the first tests that comes to mind is HLA-B*5701 testing for abacavir hypersensitivity; another is HCV genotyping (though there we're testing the profile of the virus, not the patient). In psychiatry, from my very limited experience as a 4th year medical student, you'll run genetic testing for patients who a) can afford it and b) have been nonresponders or have had strange reactions to multiple different classes of psychotropic medications. I only have limited experience in private practice, but at least in my VA, county, and academic private rotations I have very rarely seen this done. There's a reason drugs go through extensive testing before they make it to market, and why they're tested on broad populations: the hope is that trends in ethnicity, gender, and underlying conditions emerge during the drug development process, or at worst in the few years after the drug hits market, to better guide treatment. Similarly, these studies often guide dosing in the sense that investigators measure drug levels in the serum and correlate them with patient symptoms to figure out what the therapeutic range is (e.g., lithium 0.8-1.2 mEq/L). As I hinted at above, the recommended dosing for a medication is initially based on studies conducted in fairly healthy people. This is where experience, journal articles about how a drug performs in the field, and clinical judgment come into play, but we still largely adhere to the recommended doses.
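The lithium example is the essence of therapeutic drug monitoring: measure a serum level and compare it against the window the original studies established. A trivial sketch of that comparison (the function name and labels are my own; the 0.8-1.2 mEq/L range is the one cited above):

```python
def classify_lithium_level(level_meq_l, low=0.8, high=1.2):
    """Compare a measured serum lithium level (mEq/L) to the
    maintenance range cited above (0.8-1.2 mEq/L)."""
    if level_meq_l < low:
        return "subtherapeutic"
    if level_meq_l > high:
        return "supratherapeutic"  # risk of toxicity rises above the window
    return "therapeutic"

print(classify_lithium_level(0.6))  # subtherapeutic
print(classify_lithium_level(1.0))  # therapeutic
print(classify_lithium_level(1.5))  # supratherapeutic
```

The point is that the "test" for dosing here is the drug level itself, correlated with symptoms, rather than a pre-treatment genetic assay.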
Titrating a drug up to these levels can be frustrating for the impatient, but it has several important benefits: 1) the patient builds up a tolerance to the drug, so they can either ride out the side effects or their body can adapt to them; 2) if the patient reacts adversely, it happens at a lower dosage, so hopefully the outcome isn't as severe; and 3) you can see how the patient responds to the drug and make sure they're not changing too rapidly or heading in the wrong direction. There are plenty of other reasons as well, including maximum drug absorption, half-lives, excretion rates, etc.
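The half-life point in particular explains why titration takes time: under repeated dosing at a fixed interval, a drug accumulates toward a steady-state concentration, and the fraction of steady state reached after n half-lives is 1 - 0.5^n regardless of the dose. A quick sketch of that standard first-order pharmacokinetic result (not specific to any drug):

```python
def fraction_of_steady_state(half_lives_elapsed):
    """Fraction of the eventual steady-state concentration reached
    after a given number of half-lives of constant, regular dosing."""
    return 1 - 0.5 ** half_lives_elapsed

for n in range(1, 6):
    print(f"{n} half-lives: {fraction_of_steady_state(n):.1%}")
# climbs 50.0%, 75.0%, 87.5%, 93.8%, 96.9% - which is why the usual
# rule of thumb is to wait 4-5 half-lives before judging a dose's effect
```

The same math runs in reverse when a drug is stopped, which is part of why dose changes are spaced out rather than stacked.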
These are very good questions. It sounds like you're taking undergraduate courses right now; as you take biochemistry and learn about pharmacokinetics, some of this will become clearer. Likewise, I'd encourage you to take a class in basic statistics and study design, because you'll learn what's required for a drug to go from an idea in a lab to a marketed product, how placebo effects are controlled for, and what we look for in a study that contributes to the evidence base for medical decision making. If you're planning to go to medical school, you'll learn it all again, but these are important principles to begin to understand.