First off, are you referring to statistical reliability, or reliability in a more general sense? I just want to clarify, since statistical reliability has somewhat clearer standards. I'll assume you mean a more general "Can I trust what the study says" definition.
I actually sort of use both. I have a threshold below which I will deem it pure and utter garbage: things that clearly use flawed methodology (e.g. a manipulation that doesn't actually do what they think it does, or that one study I read a few years back where someone used the BDI to measure the efficacy of their mood induction), studies that try to draw conclusions that need a control group without actually having one, and things that CLEARLY don't have a representative sample (you would not BELIEVE how much research is out there that was run on, say, treatment-seeking individuals at clinic X, that the authors seem to think generalizes to all populations). Nothing wrong with running studies on treatment-seekers in clinics, just don't pretend it's reflective of everyone.
But I digress.
Once something has passed the "crap" threshold, that doesn't mean I trust it absolutely. I typically lump it into a "needs further investigation" category, and if it seems like a lot of studies are finding similar results, then I may trust those findings. After that, there is a spectrum of reliability based on way too many factors to list, many of which are "soft" factors to begin with that may or may not be relevant or necessary in a given study.
So really it's both. I think a study needs to meet certain criteria (which vary based on the kind of research) for me to deem it anything other than a waste of time. After that, there's still a spectrum of how trustworthy something is.