By: Jeremy Teitelbaum
I remember when, relatively early in my academic career, I was offered the opportunity to serve on a search committee charged with selecting a new faculty member for the University of Illinois at Chicago (UIC) math department, of which I was a junior member. While this task meant a lot of extra work – I recall that we received many hundreds of applications for just one or two positions, each of which needed to be carefully read and the most interesting of which needed to be extensively discussed – I was excited at the chance to participate. Finally, after suffering through the process of applying for college, graduate school, a post-doctoral position, and a faculty position, I would get to be one of those faceless people on the other side of the evaluation process.
As I set out to read the applicants’ CVs and cover letters, I was confident in my ability to reach fair conclusions, sure that I was free from prejudice based on gender or race, and committed to the principle of “hiring the best candidate.” I knew that I would reach my decisions by a careful weighing of each candidate’s publication record, supplemented by the considered opinions of experts in the field as reported in letters of recommendation and after discussion with my similarly well-informed colleagues on the search committee.
I look back on that young, naive, and ill-informed version of myself with, alternately, amusement and horror.
My confidence in my “natural ability” to evaluate people was profoundly shaken when some of my colleagues at UIC, in pursuit of an NSF ADVANCE grant aimed at transforming the climate for women in science at the university, sat me down and made me read some of the literature on implicit bias.
Two articles, in particular, had a profound impact on me. The first of these is titled “Nepotism and sexism in peer-review,” written by Wenneras and Wold, and published in Nature – one of the most prestigious of all scientific publications. This paper analyzes the results of a peer-reviewed competition for a prestigious Swedish postdoctoral fellowship. The authors found that objective measures of scientific productivity – papers published, papers published as first author, quality of placement of journal articles, and so forth – were accurate predictors of how the peer-review committee ranked men in the competition. They were much less accurate predictors of how the committee ranked women. Something happened in the committee to make the concrete achievements of women count for less than those of men. The authors’ methodology for reaching this conclusion was simple and transparent – and the results, to me, shocking.
The second article to shake my confidence in my ability to make fair evaluations is entitled “Exploring the color of glass: letters of recommendation for female and male medical faculty” by Trix and Psenka, published in the journal Discourse and Society. This article presents an analysis of actual letters of recommendation written for individuals applying for jobs as faculty members in medical schools, comparing the language used to describe men and women. The article demonstrates how the language in the letters tends to “portray women as teachers and students, and men as researchers and professionals.”
Trix and Psenka’s conclusion may sound abstract, but in the course of my service on search and promotion committees I have read many, many letters of recommendation, and the features that Trix and Psenka point out in their article aren’t subtle – they’re obvious, and they recur over and over. The supposedly fair and impartial evaluations offered by the letters of recommendation are, in fact, mixed up with all kinds of linguistic constructions that, though unintentional and invisible to the authors of the letters, act to undermine the accomplishments of women.
These two articles treat gender bias, but there is ample evidence of similar phenomena associated with race. And while these two articles are the ones I found most influential, there is, in fact, a vast literature on bias – including contributions from UConn’s social psychology faculty – that reinforces the conclusion that all of us, men and women alike, are influenced in our ability to make fair evaluations by a whole range of social conventions.
When I add to this picture the fact that any evaluation system must consider multiple criteria, and that it almost never happens that a candidate is “best” in all such criteria, I’m forced to conclude that my goal back on that first search committee – to identify “the best candidate” – was not only impossible, it was meaningless.
Fortunately, there is help on how to evaluate candidates, if not to identify “the best,” then at least to make clear their strengths and weaknesses so that one can make a fair and informed choice. For example, it helps to involve people with different backgrounds and points of view in the evaluation to make sure the widest range of relevant criteria are considered; to be as systematic as possible, so as to ensure that every candidate is looked at in the light of the same criteria; and to educate the people who are making the evaluations about their own limitations, so that they can consciously resist their biases and be open to a wider range of candidates.
As an academic, I am committed to upholding high standards of scholarship. As a university administrator, I am required to put the resources I control to work in the most productive possible way. As a human being, I am obligated to treat every individual fairly and to value their contributions justly. To do any of these things, I must make judgments about people and their work. The literature I’ve mentioned above has taught me that I can’t rely solely on my academic background and good intentions when making such evaluations. I must understand my own limitations and employ my conscious mind to combat them. It’s been a humbling realization.
Read more posts by Jeremy Teitelbaum, dean of the College of Liberal Arts and Sciences, on his blog.