I do think this would help. However, even raw data is quite easy to fake. A lot of the papers that get caught have been fairly unsophisticated attempts, but if we're just talking spreadsheets, any statistician worth their salt could work backward from a desired result and run simulations until they obtain the desired effects with a realistic sample. Furthermore, given the practically endless ways to muck around with data, I always wonder how often even two people analyzing the same raw data would arrive at the exact same answer. With journals pushing for briefer and briefer papers, not all of these details make it into the methods section, and substantial variability is possible. This goes double the more complex the analysis. A t-test comparing two single-item measures? Sure, that we can do. A random effects model of post-processed ERP data that underwent a spatiotemporal PCA following ICA blink correction? I'm not convinced I could replicate my own analyses if I didn't save the scripts (which, while we're on the subject...should probably be included too).
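To make the "working backward" point concrete, here's a minimal sketch (Python, assuming numpy/scipy are available; the effect size and sample size are made-up numbers, not from any real study). It just redraws simulated groups until a t-test comes out "significant", and the surviving draw would pass a casual plausibility check as raw data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng()
target_d = 0.5   # desired effect size (Cohen's d) -- an assumed target
n = 40           # "realistic" per-group sample size -- also assumed

while True:
    # Simulate two groups under the effect you want to report
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treatment = rng.normal(loc=target_d, scale=1.0, size=n)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        break  # keep the draw that "worked" and call it raw data

print(f"t({2 * n - 2}) = {t:.2f}, p = {p:.3f}")
```

Ten lines, a few seconds of runtime, and you have a fabricated dataset with realistic noise. That's the level of sophistication we're up against.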
I think more to the point, though, is...who is actually going to check these things? Journals have enough trouble finding qualified reviewers as it is, and re-analyzing data can take hours upon hours (if not days).
I'm by no means disagreeing that this would help, and I fully agree that it needs to happen. However, I think it's important to recognize exactly how much we need to do if we are going to check work effectively. Requiring posting of datasets is a step in the right direction...but still a VERY small one in the grand scheme of things.