Dissertation Defense Meeting: What Happens?


My advisor is very, very hands-off. Because he is very well known and helps us publish a lot in grad school, he doesn't feel he owes it to us to help with our dissertations. For example, he only made me revise my proposal once (and only corrected a few misspellings, etc.) and told me my first draft of the finished product "looked fine." However, I did work on the defense for almost a year...

During my proposal, it was obvious my committee did not really read my prospectus. It was only 150 pages with references. I guess I figured they would go through it with a fine-tooth comb. However, it was very obvious they had read a page here and there so they would have something to say during the meeting. One member told me that I should lower my power from .95. However, my sample size was HUGE, so lowering the power while keeping my sample size made absolutely no difference...

Even stranger, they handed me back their copies of my dissertation proposal and took NO notes during the meeting. Therefore, I know they don't remember (or care to remember) the revisions they asked for during the meeting.

My class is very small and I am the only one graduating this year with my PhD. My advisor doesn't care to help me and gets mad at me or my fellow students if we ask for any help. Thus, I am wondering if any of you know:

1 - What happens during a defense?
2 - A lot of people at other schools tell me that worrying about a FEW typos, etc. is pointless. What the committee really wants is for you to be able to explain your topic and stats intelligently. Thus, they say to focus more on explaining your stats than on going over your diss with a fine-tooth comb... True?
3 - A lot of people at other schools say that their committees only do a very cursory reading of their dissertation, too. Is this true?
4 -

I'm not sure when you're defending, and it may be too late for this, but I strongly suggest that you attend (watch) another student's dissertation defense before scheduling your own. I could tell you about defenses at my institution (an hour-long presentation of key parts of the work by the student, then two rounds of questions by the committee, then open questions from the audience, and then everyone leaves while the committee makes a decision), but this will vary slightly everywhere.

I realize your program (clinical) is small and no one else is graduating this year, but is there a defense anywhere in the psych department before yours is scheduled? Attending a defense in social or developmental, for example, would still be beneficial.
 
Sorry to hear that it sounds like your advisor isn't...advising. I've had experiences with not getting as much feedback on things as I expected or wanted. I think they were saving it up for the first time I practiced a conference talk, which was simultaneously the greatest and worst moment of my grad school career to date. :)

Unfortunately, even within the department it's probably going to vary widely based on who is on the committee. For example, a 150-page proposal would be on the long side for members of my lab. They aren't as short as manuscripts, but the faculty encourage us to keep them reasonable. I think that's because they'd prefer we spend time in the lab getting other work done rather than writing a 100-page introduction, 95 pages of which will have to be deleted anyway.

For the most part, I wouldn't expect much in the way of spelling/grammar corrections unless one of your committee members has OCD traits, or there is something really egregious. They might point something out so you can correct it, ask you to clarify an unclear sentence, etc., but I've never heard of that being a focus of a proposal or defense. They are there as scientific reviewers, not copyeditors.

PS - Just for clarity - one thing you said confused me. You mentioned a committee member suggested lowering your power from .95 (which admittedly is high, I think around .8 is more typical) at the proposal. You said you did this, but kept the sample size the same? As far as I know, the only reason you would want to lower your power at the proposal stage would be so you could have a smaller sample and not have to run as many participants. Though I guess I could make a case for doing it if you are studying something where avoiding Type I errors is critical, so you want to set a lower alpha rate, or there are theoretical reasons where you only want to detect larger effects. In either of those cases, it seems like the committee would have spelled that out. I'm just curious since I'm still learning about power analysis so thought I'd take the opportunity to ask.
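To make the trade-off concrete, here is a quick stdlib-only sketch of an a priori power analysis for a two-sample comparison, using the usual normal approximation. The function names and the d = 0.5 effect size are illustrative assumptions, not anything from this thread; the point is just that demanding 95% power rather than the more typical 80% requires noticeably more participants at the same effect size.

```python
from math import sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (math stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_ppf(p: float) -> float:
    """Standard normal quantile by bisection (plenty accurate here)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(effect_size: float, power: float, alpha: float = 0.05) -> float:
    """Approximate n per group for a two-sided two-sample test
    (normal approximation to the t-test)."""
    z_alpha = norm_ppf(1.0 - alpha / 2.0)
    z_power = norm_ppf(power)
    return 2.0 * ((z_alpha + z_power) / effect_size) ** 2

# Medium effect (d = 0.5): demanding 95% power costs far more participants
print(round(n_per_group(0.5, 0.80)))  # ≈ 63 per group
print(round(n_per_group(0.5, 0.95)))  # ≈ 104 per group
```

So a committee member suggesting "lower your power" at the proposal stage is usually suggesting a smaller required sample, which is exactly why the suggestion is moot if the sample size is fixed anyway.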
 
they don't allow us to collect data -- they think doing so is a waste of time/not a good learning experience. hence, we have a database we basically mine for publications and our theses and dissertations. anyway, as far as power goes, looking at the law of large numbers:


Power = (Sample Size × Effect Size) / (Mean Squared Within)

So decreasing power, per se, would mean that my effect size would decrease (sample size remaining static). I don't understand why I would want that.
 
sorry -- meant to say that it would make no sense to me to lower power when I need to find a large effect size, not a small one, because a small one may be statistically significant but not clinically significant
 
Now I'm even more confused. We are talking about a priori power analysis, right? Not observed power? The effect size is what it is whether you have 50 or 1000 participants. Power analysis is based on what effect size you expect to have and/or want to detect.

Higher power = ability to detect smaller effect sizes, when all else is held constant. With a sample size of 10,000 even clinically meaningless effects might (and probably will) come out significant. With a sample size of 100, only values with much larger effect sizes would be significant. Same applies to power. All else equal, the more power you have, the smaller and smaller an effect you can detect.
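That relationship can be sketched with nothing but the standard library. This is a hypothetical numeric illustration (normal approximation to a two-sided two-sample test at alpha = .05, with made-up effect sizes), not anything computed in this thread: the same small effect is nearly undetectable at n = 100 per group but a near-certainty at n = 10,000, and the minimum detectable effect at a fixed power shrinks as n grows.

```python
from math import sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d: float, n_per_group: float, z_crit: float = 1.96) -> float:
    """Approximate power of a two-sided two-sample test at alpha = .05."""
    ncp = d * sqrt(n_per_group / 2.0)  # noncentrality for effect size d
    return norm_cdf(ncp - z_crit) + norm_cdf(-ncp - z_crit)

def min_detectable_d(n_per_group: float, z_power: float, z_crit: float = 1.96) -> float:
    """Smallest effect detectable at a target power; z_power is the normal
    quantile of that power (e.g. 0.84 for 80% power)."""
    return (z_crit + z_power) / sqrt(n_per_group / 2.0)

# Same tiny effect (d = 0.2) at two sample sizes:
print(round(power_two_sample(0.2, 100), 2))     # ≈ 0.29
print(round(power_two_sample(0.2, 10_000), 2))  # ≈ 1.0

# Minimum detectable effect at 80% power shrinks as n grows:
print(round(min_detectable_d(100, 0.84), 2))     # ≈ 0.4
print(round(min_detectable_d(10_000, 0.84), 2))  # ≈ 0.04
```

The d ≈ 0.04 at n = 10,000 is exactly the "clinically meaningless but statistically significant" scenario: with a huge sample, effects far too small to matter clinically will still cross the significance threshold.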

Sorry for derailing your thread, just trying to wrap my brain around this.
 

yes, but changing my sample size changes the effect size I can actually detect
 
I had to give your comments some thought. Here is what I think is going on; tell me if this makes sense to you: It is an a priori power analysis. One committee member suggested I lower my power, which, like you said, would only allow me to detect larger effect sizes (all else constant). Another committee member suggested I keep in mind clinical significance versus statistical significance. A third member thought that small effect sizes were alright and that I should pay attention solely to statistical significance, not clinical significance, because this is a pilot study and any significance would be important.

Additionally, the literature suggests that direct observation of the patient is the way to diagnose psychiatric disorders in this population (NOT measures and tests, because they lead to underdiagnosis of psych disorders compared to observation), and I am using measures and tests to screen for psych disorders. So a small effect size likely belies the true strength of the association between autism spectrum disorder and co-morbid psychiatric disorders. (I am looking at whether a dx of an autism spectrum disorder is associated with higher levels of psychiatric disorders than a sole diagnosis of intellectual disability.)
 
Okay, that makes sense now. Member #3 seems to disagree with the other two, but they are all valid points. Though I think #3 may be over-stating their case - I don't think underdiagnosis would necessarily reduce the strength of the association. It certainly COULD, but it depends on a number of factors. Certainly a continuous self-report measure with a reasonable distribution could have WAY more power to detect small effects than a dichotomous diagnosis based on behavioral observation.

I guess what I'm still unclear on is what you are supposed to do about it in an archival analysis. Once the data has been collected, I don't think there is anything you CAN do to change your power during analysis, other than using a more stringent alpha.
 
Another point of contention among my committee members is that some of them think I should run a chi-square test of independence on the nominal data to ascertain whether the presence or absence of an autism spectrum disorder is associated with the presence of individual psychiatric disorders per the scale I am using. They then want me to use the psychiatric disorders with significant elevations as the DVs in my MANOVA. However, because MANOVA already takes all the DVs, combines them into one composite DV, and maximizes group differences, I think the chi-square is pointless. I ran the chi-square just to placate them but, in reality, have not found any research that has done this and see little rationale for it...
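For anyone curious what that screening step looks like, here is a minimal stdlib-only sketch of a chi-square test of independence on a 2x2 table (diagnosis group x disorder present/absent). The counts are entirely made up for illustration; it uses the 1-df identity P(chi-square > x) = erfc(sqrt(x/2)), so no stats library is needed.

```python
from math import sqrt, erfc

def chi2_independence_2x2(a: int, b: int, c: int, d: int):
    """Chi-square test of independence for the 2x2 table [[a, b], [c, d]].
    Returns (chi2, p) using the shortcut formula
    chi2 = N(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), df = 1."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2.0))  # upper-tail prob. for chi-square with 1 df
    return chi2, p

# Hypothetical counts: rows = ASD vs. ID-only; cols = disorder present vs. absent
chi2, p = chi2_independence_2x2(30, 20, 15, 35)
print(round(chi2, 2))  # ≈ 9.09  (p < .01)
```

Whether to use it as a DV-selection filter before MANOVA is a separate (and, as noted above, debatable) design question; the test itself only tells you whether group and diagnosis are associated in the nominal data.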
 