Point well taken, but I said "basic," not "simple". As in univariate or multivariate regression, t-tests, hazard ratios, perhaps propensity score matching. Anything beyond that, such as Bayesian analysis, will likely only come from someone with considerable prior training or from a student who finds an interesting article and tries to replicate its methods.
I'm using the two words interchangeably, but either way the point doesn't change. Most of those are bread-and-butter methods, but they require a lot of knowledge to use properly (anyone who took a stats class and has SPSS can click buttons, but that doesn't mean they're doing the right thing or actually understand what the output means). I'm also going to assume that by "multivariate regression" you meant "multivariable regression." The former has multiple dependent/outcome variables analyzed simultaneously, while the latter has multiple independent variables and a single dependent/outcome variable. Confusing the two is another common mistake I've seen students and PIs make (it's even been noted in the research literature), but there is a clear difference between multivariable and multivariate. Propensity score matching isn't really considered a basic technique and can be pretty tricky; like everything else it has its pitfalls, and it is far from a magic wand that fixes non-randomized group allocation. Bayesian statistics isn't necessarily less "basic" than the methods you're suggesting, but the most controversial and often trickiest part is specifying priors, which is fairly subjective.
Still, coming back to something like regression: I know from personal experience, and from watching people without stats backgrounds publish papers, that they mostly consider their work done once they've coded the data and gotten some p-values and model-based statistics. (It's not too hard to read the statistical methods section of a paper and discern who knows what they're doing, and more often than not, when the stats are done well, the person in charge of them had an education in biostats/stats.) Outside of the papers with statisticians involved, few authors mention, or are even aware of, proper model building, assumption checking, model validation, or diagnostics such as influential observations and outliers (which isn't as simple as locating the outlier and deleting it...). They misapply procedures or fail to apply the right one for a given situation, treat each analysis as a cookbook recipe that varies little from the last study, and the list goes on. These are things that would be readily apparent to someone with a decent background in stats.
So I come back to my earlier advice: the best thing someone can do is learn from vetted resources written or delivered by statisticians and biostatisticians. You can learn to generate and loosely interpret a t-test in under an hour, but to actually understand the what, when, how, where, and why (all important, along with alternatives and supplemental analyses), you need to spend a lot of time working with both clean and dirty data. Once you do, you'll start to realize that misapplying a test or failing to verify assumptions can lead to dramatically different and inappropriate conclusions. You'll notice that certain things give you equivalent answers, some with more useful or more convenient information, as in the case of an independent t-test with two groups (a special case of ANOVA), an ANOVA on the same two groups, and a simple linear regression using a dummy variable for the same two groups (people are often surprised by that). You'll also realize that always using Fisher's exact test can mean an unnecessary loss of power, while always using Pearson's chi-square test of independence runs into trouble when expected cell counts start to dip below 5 (and you'll learn when a few expected counts below 5 are immaterial versus when a single one is a genuine problem).
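The t-test/ANOVA/regression equivalence is easy to verify yourself. A quick sketch with SciPy, where the group sizes, means, and seed are arbitrary choices of mine for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 2, size=30)   # group 0
b = rng.normal(12, 2, size=30)   # group 1

# 1) Independent-samples t-test (pooled variance)
t, p_t = stats.ttest_ind(a, b, equal_var=True)

# 2) One-way ANOVA on the same two groups
f, p_f = stats.f_oneway(a, b)

# 3) Simple linear regression of the outcome on a 0/1 group dummy
y = np.concatenate([a, b])
dummy = np.concatenate([np.zeros(30), np.ones(30)])
reg = stats.linregress(dummy, y)

# t squared equals the ANOVA F, and all three p-values coincide
print(t**2, f)
print(p_t, p_f, reg.pvalue)
```

The regression slope is just the difference in group means, which is why all three procedures are answering the same question in different clothing.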
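The expected-count issue can also be checked directly rather than guessed at: SciPy's `chi2_contingency` returns the expected counts alongside the test statistic. The 2x2 table below is made up purely to show a case where cells dip under 5:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table (e.g., exposure by outcome), chosen so the
# chi-square approximation is on shaky ground
table = np.array([[1, 10],
                  [8,  3]])

chi2, p_chi, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

print(expected)         # two cells have expected counts below 5
print(p_chi, p_fisher)  # the approximate and exact p-values can diverge here
```

Inspecting `expected` before choosing between Pearson's chi-square and Fisher's exact test is exactly the kind of assumption check that separates button-clicking from knowing what the test is doing.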
It's pretty similar to medicine in that you don't know what you don't know, and what you don't know can really cause problems.