I think there's a lot of misinformation in this thread, and a lot of confusion about biostatistics overall.
Whenever you design a study, your starting hypothesis must be the null hypothesis, meaning that the "default" assumption is that there is no difference or no correlation between X and Y.
For instance, if you are testing a new cancer drug in a clinical trial, your null hypothesis must be that there is no difference in outcomes between giving the drug and giving a placebo.
If there is a difference, you need to make sure it is not due to chance, which is where the P value comes in.
If I had two groups of one patient each (one got the drug and one didn't), and the patient on the drug lived while the other died, the P value would be nowhere near significant: with a single patient per arm, that outcome is easily explained by chance, and the study is hopelessly underpowered.
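You can see this with a quick sketch (the two-patient counts are hypothetical, and Fisher's exact test stands in for whatever analysis the trial would actually use):

```python
from scipy.stats import fisher_exact

# Hypothetical two-patient "trial": rows are the arms,
# columns are survived / died.
tiny_trial = [[1, 0],   # drug arm: 1 survived, 0 died
              [0, 1]]   # placebo arm: 0 survived, 1 died

odds_ratio, p_value = fisher_exact(tiny_trial)
print(p_value)  # 1.0 -- with one patient per arm, chance explains everything
```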
Conversely, in a large trial you can get a significant P value even when the difference is small, because in a larger population even small changes in outcome are unlikely to be due to chance alone.
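And the flip side, a sketch with made-up numbers for a large trial, where a modest difference in survival yields a tiny P value:

```python
from scipy.stats import chi2_contingency

# Hypothetical large trial: 10,000 patients per arm,
# 90% survival on the drug vs. 88% on placebo.
large_trial = [[9000, 1000],   # drug arm: survived, died
               [8800, 1200]]   # placebo arm: survived, died

chi2, p_value, dof, expected = chi2_contingency(large_trial)
print(p_value)  # ~7e-6: a 2-point difference is very unlikely to be chance at this size
```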
If the 95% confidence interval for the odds ratio or relative risk crosses unity (1), then by definition the P value must be over 0.05, and you fail to reject the null hypothesis that there is no difference (strictly speaking, you never "accept" the null; you just fail to reject it).
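To make that CI/P-value duality concrete, here's a minimal sketch (with hypothetical counts) using the standard Wald approximation for an odds ratio; the 95% CI covers 1 exactly when the two-sided P value exceeds 0.05:

```python
import math
from scipy.stats import norm

# Hypothetical 2x2 counts: events / non-events in each arm.
a, b = 30, 70    # drug arm: events, non-events
c, d = 40, 60    # placebo arm: events, non-events

log_or = math.log((a * d) / (b * c))
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of the log odds ratio

lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
p = 2 * norm.sf(abs(log_or) / se)       # Wald test of OR = 1

print(f"OR 95% CI: ({lo:.2f}, {hi:.2f}), p = {p:.3f}")
# (0.36, 1.15), p = 0.139 -- the CI covers 1 precisely because |log OR|/SE < 1.96
```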
There's really no such thing as a "statistically significant lack of an association" in the way you're thinking of it. The closest thing you can get is a noninferiority trial.
For instance, patients with DVTs need an anticoagulant medication (such as warfarin). If a drug company wants to bring out a new drug, "drug B", you could not ethically do a randomized placebo-controlled trial: warfarin has been demonstrated to benefit patients with DVTs, so a placebo arm would mean withholding a valuable treatment from those patients.

However, you could do a superiority trial (where you look for a statistically significant difference in mortality, rates of PE, and so forth; if you find a difference whose 95% CI doesn't cross 1, you've found a statistically significant change), or you could do a noninferiority trial.

In a noninferiority trial, you set your noninferiority margin first. For instance, let's say warfarin causes a bleeding rate of 2% per year. I want to make sure drug B is not inferior to warfarin (i.e., doesn't cause meaningfully more bleeds), so I design a study with enough patients (adequately powered) to rule out a 0.5% difference in bleeding at an alpha of 0.05. When I run the trial, if the observed bleeding rates are close (e.g., warfarin was 2% and drug B was 2.1%) and the confidence interval for the difference stays below the 0.5% margin, I can conclude that drug B causes less than 0.5% more bleeding than warfarin (as long as the study was well designed, etc.). The P value for a plain difference between the two drugs will still be over 0.05.
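Here's a rough sketch of that noninferiority logic with made-up numbers (10,000 patients per arm and the 0.5% margin above); the test is whether the upper bound of the CI for the difference in bleeding rates stays below the margin:

```python
import math

margin = 0.005                      # prespecified noninferiority margin: 0.5%
n_warf, bleeds_warf = 10_000, 200   # hypothetical: 2.0% bleeding on warfarin
n_b, bleeds_b = 10_000, 210         # hypothetical: 2.1% bleeding on drug B

p_warf, p_b = bleeds_warf / n_warf, bleeds_b / n_b
diff = p_b - p_warf                 # observed excess bleeding on drug B: 0.1%
se = math.sqrt(p_warf * (1 - p_warf) / n_warf + p_b * (1 - p_b) / n_b)

upper = diff + 1.96 * se            # upper limit of the 95% CI for the difference
print(f"upper bound = {upper:.4f}") # ~0.0049 < 0.005, so noninferiority is shown
```

Notice the sample size it takes: with only a few thousand patients per arm, the same observed rates would put the upper bound well above the margin.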
So to answer OP's question: if the 95% CI crosses 1, the P value must by definition be higher than 0.05, and there is no statistically significant difference between "X" and "Y" (e.g., warfarin and drug B). That doesn't mean the result is unimportant, though: in the noninferiority trial described above, it matters precisely because you've shown that drug B doesn't cause meaningfully more bleeding than warfarin (their bleeding rates are similar, within the prespecified margin). Your P value will still be over 0.05, and the power of your study tells you how small a difference you could have expected to detect.
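On that last point about power, here's a back-of-the-envelope sketch of the usual sample-size formula for comparing two proportions (all numbers hypothetical):

```python
import math
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm needed to detect p1 vs. p2."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for two-sided alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Detecting 2% vs. 2.5% bleeding (a 0.5% absolute difference):
print(n_per_arm(0.02, 0.025))  # ~13,800 patients per arm
```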