I am happy to report that I will be collaborating with Carolyn Dicey Jennings and the volunteer committee she is working with to collect and report job placement data to also develop a philosophy grad student report. Although the report is only at the very beginning stages of development (and our aim at this point is to see whether a sound methodology can be developed for such a report), its prospective aim will be to report grad student evaluations of their graduate programs on issues such as:
- Overall department climate
- Climate for women
- Support from faculty
- Job placement
- Attrition
- Etc.
In addition, if feasible, the report may also include evaluations by former grad students, both (1) early-career philosophers who recently graduated from their programs, and (2) individuals who left their programs without finishing their degrees.
Although readers raised some concerns about such a report in the comments section of my previous post, there appears to be significant support for the proposal (see here and here), provided it is done well. Furthermore, my own first-, second-, and third-hand experiences strongly suggest the importance of such a report. Seeking a graduate degree in philosophy — especially a PhD — involves great risks and personal investment. Prospective students who want the kind of information a grad-student report might provide should, I believe, have access to such a report…again, provided the report is done well.
Our first aim, then, is to try to develop a sound methodology — and this is where we, the Cocoon, come in! One thing I think our readers can do is help us work through the development of a sound methodology together. Hammering out potential methodologies openly is, I think, not only good in terms of transparency, but also in terms of developing a sound one.
So, here is what I would like to do today. I would like to:
- Present for discussion some possible, programmatic solutions to the concerns readers raised in the comments section of my previous post, and
- Solicit reader suggestions for survey items (i.e. what should be measured, and how?)
Here, in brief, are some concerns readers raised about a grad student report (both publicly and privately to me by email):
- Potential for retaliation/increasing department tensions: since the report may paint some grad programs in a negative light, there is a serious worry about retaliation, both by faculty and by fellow students. As one reader noted, negative climate evaluations of a department could pit faculty and grad students against one another.
- Potential for "gaming" the report: since grad-students have an interest in their program's reputation, the results of a grad student report could erroneously measure how students want their program to be perceived.
- Difficulties in measuring climate for women & underrepresented minorities: since men vastly outnumber women and underrepresented minorities in almost all departments, measures of climate could primarily reflect the interests and judgments of the dominant majority.
- Difficulties measuring overall department climate: since a department's climate could be "poor" for all sorts of relatively innocuous reasons (e.g., grad students/faculty living far from campus and rarely socializing), data on climate could be highly misleading, suggesting serious problems where there are none.
- Problems classifying and perpetuating relatively innocuous climate problems: on a related note, data suggesting a department has a poor climate in relatively innocuous ways could scare away prospective students who might significantly improve those elements of the department.
- Problems with baselines: as one commenter on my previous post noted, some students may dislike their department and/or rate their department's placement record negatively even though their department does a great job educating and placing students.
I am optimistic that all of these concerns can be sufficiently addressed. Here are some of my thoughts:
- On the report fostering retaliation/tension: The report could include items relating specifically to retaliation ("Members of my program would respond to negative data from this report in a positive, productive manner" [agree/disagree]). This, I believe, might seriously deter faculty and grad students from retaliating, and at the very least put public pressure on programs to avoid it or cut it out. Furthermore, it might be possible to secure departmental commitments not to retaliate, and to prevent retaliation by faculty and students, on the basis of the report. Finally, although retaliation and tensions are bad, it's not obvious to me that their existence is worse than the alternative, which is for no grad student report to exist and for serious departmental problems (which seriously affect students) to go consistently unaddressed.
- On potential for "gaming" the report: we could have some sort of reporting mechanism in the survey itself logging respondent concerns about gaming the report ("My department has exerted pressure on students to reflect positively on the department in this report" [agree/disagree]). Although grad students who might be complicit in trying to game the report might answer "disagree", it is, I think, highly unlikely that every grad student would do so, as there are unhappy students in most, if not all, programs who would be apt to give genuine answers. Significant differences across departments in "agree" rates to such a survey question (even if only a small number of students in a given department "agree" with the item) might, therefore, indicate attempts to "game" the report, suggesting to readers of the report to consider that department's scores with some suspicion, as well as deter the practice within departments.
- On difficulties measuring climate for women & underrepresented minorities: I would like to suggest that this issue can be dealt with by controlling for differences in departmental representation. For instance, even if men vastly outnumber women in a given department (say, 29 men to 3 women), the survey's item on climate for women could give men and women respondents each a "50% share" of their department's score on the item (where, in the current case, responses by the 3 women in the department would comprise a full 50% of the report's average score on this item). Normalizing the data in this way would give women and men in a department equal influence over their department's score on the item, amplifying the voices of individual women in minority positions in comparison to individual men in a majority position — and, I think, rightly so.
- On problems classifying and perpetuating relatively innocuous climate issues (e.g., little socializing in the department): There should, I think, be survey items on these climate issues ("There is a lot of socializing in my department" [agree/disagree]), and, while a department's faring poorly on such items might harm the department in the short run, leading prospective students who enjoy socializing to avoid the program, (A) this information would be very much in the interest of prospective students of that sort to know, and (B) the survey could lead departments to make concerted efforts to improve their scores on these items to attract students.
- On problems with baselines: of course there are always issues with separating "perception from reality." Grad students in a given program might rate their department's placement rate poorly even though, by any objective measure, their department's placement rate is stellar. But this, I want to say, is a problem with all "perception" surveys, and it is all the more reason to present such surveys alongside more objective measures (of actual placement rates).
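To make the "gaming" detection idea a bit more concrete, here is a minimal sketch of how one might flag departments whose "agree" rate on the pressure item stands out against the overall rate. The department names, counts, and the flagging threshold are all hypothetical assumptions for illustration — a real report would need a properly justified statistical test, not a fixed cutoff.

```python
# Sketch: flag any department whose "agree" rate on the item
# "My department has exerted pressure on students to reflect positively..."
# exceeds the pooled rate across all departments by some margin.
# The 0.15 threshold is an arbitrary placeholder, not a recommendation.

def flag_possible_gaming(agree_counts, totals, threshold=0.15):
    """agree_counts/totals: dicts mapping department -> respondent counts.

    Returns the departments whose agree rate exceeds the pooled
    agree rate by more than `threshold`."""
    pooled = sum(agree_counts.values()) / sum(totals.values())
    return [dept for dept in agree_counts
            if agree_counts[dept] / totals[dept] - pooled > threshold]

# Hypothetical data: three departments, 20 respondents each.
agree = {"Dept A": 1, "Dept B": 8, "Dept C": 0}
total = {"Dept A": 20, "Dept B": 20, "Dept C": 20}
print(flag_possible_gaming(agree, total))  # → ['Dept B']
```

Here the pooled agree rate is 9/60 = 0.15, so only the hypothetical "Dept B" (rate 0.40) is flagged — matching the intuition above that even a minority of honest respondents can surface attempted gaming.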
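The "50% share" normalization proposed above can also be sketched in a few lines: compute each group's mean response separately, then average the group means, so each group carries equal weight regardless of its size. The scores and the 29-to-3 department below are illustrative, using the example figures from the proposal.

```python
# Sketch of the proposed normalization for the "climate for women" item:
# each group gets an equal share of the department's score, however
# many respondents it has.

def normalized_item_score(responses):
    """responses: list of (group, score) pairs, e.g. ("woman", 2).

    Returns the mean of the per-group means, giving every group
    equal influence over the department's score."""
    by_group = {}
    for group, score in responses:
        by_group.setdefault(group, []).append(score)
    group_means = [sum(scores) / len(scores) for scores in by_group.values()]
    return sum(group_means) / len(group_means)

# Hypothetical department: 29 men who all answer 4, 3 women who all answer 2.
responses = [("man", 4)] * 29 + [("woman", 2)] * 3
print(normalized_item_score(responses))  # → 3.0
```

Note the contrast with the raw mean, (29×4 + 3×2)/32 ≈ 3.81, which the 29 men would dominate; under the normalized score, the 3 women's responses pull the item down to 3.0, reflecting their full 50% share.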
Anyway, these are some of my initial thoughts about these issues. I'm not saying that I'm right about all of these things, and I'm more than happy to receive/discuss critiques. Again, as far as I'm concerned, my aim — in taking part in the development of this survey — is to see whether the project can be done well, and with sound (enough) methodologies. And so I'd really like to solicit your feedback!