I second the idea that it's a little strange to consider MTurk a bad sample source when undergrad samples are still common (or, worse, whatever happens to those studies that recruit general surveys on APA listservs; listserv recruitment is fine if you're doing research on psychologists or students, but not for a broad general-population study).
A few things that have gotten me better MTurk data:
1. Check with your IRB to see if you can boot people out of the survey without compensation when they fail an attention-check item embedded in the first measure, or when they don't meet your inclusion criteria (e.g., the survey is for women and they select "man" on the demographic gender item). You're required to compensate people who choose not to answer questions, but you aren't ethically required to compensate bots or people who aren't reading the items or instructions. (Even if your IRB won't allow in-survey termination, you'll still want to screen those cases out afterward; see the first sketch after this list.)
2. Restrict participation to workers with at least a 95% approval rating, but NOT to Masters (the qualification-requirement sketch after this list shows one way to set this up). You have to do something like a thousand tasks to become a Master; those are not average people.
3. Have your survey platform prevent "ballot stuffing," i.e., allow each terminal to complete the survey only once. This can be worked around, but the workaround would take longer than just doing a different survey.
4. Restrict participation to the U.S. (the locale requirement in the sketch below), BUT remember that some folks use VPNs that bounce through Brazil, India, etc., so IP addresses are not always a reliable location check if you log them.
5. I have gotten much better data from very short surveys and from surveys that include a qual/written component. I've actually gotten a couple of pretty nice, though basic, qualitative data sets that were part of bigger projects.
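For point 1, here's a minimal post-hoc screening sketch in Python/pandas, for the case where your IRB only lets you flag (rather than terminate) failures. The file name, column names, instructed response, and eligibility criterion are all hypothetical placeholders for whatever your survey platform exports.

```python
import pandas as pd

# Hypothetical export; column names depend on your survey platform.
df = pd.read_csv("survey_export.csv")

# Keep rows that passed the attention check embedded in the first measure
# and that meet the eligibility criterion (here, a women-only survey).
eligible = df[
    (df["attention_check_1"] == "strongly agree")  # the instructed response
    & (df["gender"] == "woman")
]

# Everything dropped here is what you would not compensate, IRB permitting
# (bots, non-readers, ineligible respondents).
flagged = df.drop(eligible.index)
print(f"{len(flagged)} of {len(df)} responses flagged for review")
```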
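For points 2 and 4, if you post HITs through the API rather than the requester website, those restrictions map onto MTurk QualificationRequirements. Here's a sketch using boto3; the survey URL, reward, counts, and timing values are placeholders, and the qualification type IDs are the standard system IDs for approval rate and locale.

```python
import boto3

# Sketch of posting a HIT with the worker restrictions from points 2 and 4.
mturk = boto3.client("mturk", region_name="us-east-1")

# Placeholder external-question XML pointing at your survey.
external_question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Short survey (about 5 minutes)",
    Description="Answer a brief questionnaire for research purposes.",
    Reward="1.00",
    MaxAssignments=200,
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=1800,
    Question=external_question,
    QualificationRequirements=[
        # At least a 95% approval rate (system qualification ID).
        {
            "QualificationTypeId": "000000000000000000L0",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
        # Located in the U.S. (system locale qualification ID).
        {
            "QualificationTypeId": "00000000000000000071",
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
        # Note: no Masters qualification on purpose (point 2 above).
    ],
)
print(hit["HIT"]["HITId"])
```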
I played around on MTurk for a while as a worker, to see what the user experience was like. You do have to hunt for a little bit to find surveys--there are SO MANY tasks up. There are also an amazing number that are restricted to Masters only (requiring Masters seems like asking for weird data to me).