Psychological scientist Stephen Ceci is the H. L. Carr Chaired Professor of Developmental Psychology at Cornell University. His research spans a range of subjects, including the development of children’s memory and intelligence, and women in academic science.
What advice, if any, would you give parents to encourage their daughters on a path to the fields of geoscience, engineering, economics, mathematics, and physical sciences (GEEMP)?
Our report touched on a couple of interventions that parents could implement, one in high school and the other at the start of college.
Our analysis revealed something that most people don’t know. In high school, boys and girls take roughly the same number of AP courses. However, they take different ones: boys are two and four times more likely than girls to take Calculus BC and Physics Electricity/Magnetism, respectively. These courses are important for launching math-intensive majors, so the dearth of girls taking them in high school is something parents can counter by encouraging their daughters to take them, even if it means hiring tutors to help them.
The other thing we learned is that although women are less likely than men to enter college with a STEM major, they are actually more likely than men to switch into a STEM major after starting college. But switching into a STEM major is only feasible if they take science courses early in their college careers. (There’s also some evidence that exposure to a female instructor is especially effective in prompting women to switch into STEM majors.) So parents should monitor their daughters’ college experience and encourage early science coursework.
Your report shows that men and women encounter a level playing field in most math-intensive sciences after obtaining a PhD. But what did your research show regarding women’s experience with gender bias before obtaining a PhD?
The math-intensive majors (geosciences, engineering, economics, mathematics, computer science, and physics) are the ones in which women are in shortest supply. Unlike the life sciences, psychology, the social sciences, medicine, and veterinary science, all of which have women at or above parity, the math-intensive fields are nowhere near parity. Yet we didn’t find evidence of the kinds of sex bias in these fields that are often asserted. As one example, among women who major in these fields, the same proportion go on to graduate school and later to assistant professorships as is the case for men. So even though fewer women major in them, once the decision to major is made, there is no more leakage of women than there is of men.
This does not rule out the possibility that gender bias of some type has resulted in fewer women declaring majors in these fields, but if it exists, we didn’t find evidence for it. This rather surprised us, but in retrospect it should not have. I say this because women have comprised over 40% of mathematics baccalaureates for more than 40 years; you have to go back to 1972 to find a time when they were not at least 40% of math majors. One could imagine that early stereotypes have dissuaded women from majoring in these fields, but that’s not supported by the data.
I can’t review all of the data here, but let me give a few pieces. As undergraduates, more females than males major in the life sciences. If the stereotype that “science is for men” is what dissuades women from STEM fields, then we need to ask why over 60% of biology majors and nearly half of math majors are women. If stereotypes are deterring women, we need better evidence, such as some mechanism that depicts women as bench scientists in a biochemistry lab or as mathematicians and physicians, but not as physicists, computer scientists, or engineers. Women go on to earn only about 30% of the doctorates in mathematics, but even that figure suggests a critical mass in the field who were not deterred by such stereotypes, if they exist.
In other words, women are not in short supply in all science majors or even in all math-based majors. Rather, they are in short supply in specific ones, and my colleagues and I were not persuaded that stereotypes chase women out of some lab sciences but not others, or out of some math-based endeavors but not others. The word “choice” is anathema to some in this debate, but it really is a reasonable hypothesis to explain why some sex segregation in careers exists. Currently, 80% of veterinary doctorates, roughly half of MDs and biology PhDs, and 67% of psychology doctorates are awarded to women. These figures may change in the future, just as they have changed in the past. But this seemingly has more to do with choice/preference than with biases and barriers.
Your New York Times op-ed, “Academic Science Isn’t Sexist,” has created quite a stir. Is there anything specific you wished could have been explained more in-depth in the op-ed that may have been explained in greater detail in your longer report?
As you can well imagine, it is a huge challenge to take something massive, involving hundreds of statistical tests, and reduce it to an 850-word thumbnail sketch. And this is the challenge we faced in reducing our 67-page article to the length of an op-ed. It’s impossible to list all of the test results in an op-ed, and moreover it’s not the style of an editorial to report such verbatim details; rather, the goal is to fairly describe the gist of our hundreds of analyses.
So how do you convey the gist of hundreds of analyses in a brief editorial? We did it through the use of limiting language, i.e., words such as “generally,” “usually,” “with some exceptions,” etc. Although readers of the editorial were provided a link to a free download of the full 67-page article with its hundreds of analyses, some (indeed many) did not read it. Instead, they took issue with a statement in our editorial, claiming we ignored some counter-evidence, or they disagreed with our interpretation of the findings. So far I have read many such claims but have not found even one to be a valid criticism.
For example, if we compared the salaries of male and female professors at each of three ranks (assistant, associate, and full professor) in each of eight super fields of science, that would mean 24 statistical tests on this one issue. Suppose we found no sex differences in 20 of those 24 contrasts. In the actual online article, the reader would see which of the 24 tests were significant, but the reader of the 850-word editorial would read something along the lines of “women and men are largely remunerated the same.”
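To make the bookkeeping concrete, here is a toy sketch of how a three-rank-by-eight-field grid of salary comparisons yields 24 separate tests. The rank and field labels below are illustrative groupings, not the report’s actual categories, and no real data are involved.

```python
# Illustrative only: enumerate the rank-by-field grid of salary contrasts.
# Each (rank, field) cell would get its own male-vs-female comparison.
from itertools import product

ranks = ["assistant", "associate", "full"]
fields = ["geoscience", "engineering", "economics", "mathematics",
          "computer science", "physics", "life sciences", "psychology"]

contrasts = list(product(ranks, fields))
print(len(contrasts))  # 3 ranks x 8 fields = 24 separate statistical tests
```

With this many contrasts per question, summarizing them one by one in an 850-word editorial is clearly infeasible, which is why the op-ed relied on limiting language instead.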
Several bloggers/posters claimed we were devious or misleading because we hid from readers instances of discriminatory treatment of women in science. We don’t believe this is a fair criticism. We explicitly stated that, in the past, sex discrimination was an important barrier to women’s advancement in academic science careers. However, we were surprised by how much things have changed in the past two decades, and we alert readers of the article that impressions based on findings from before 2000 are likely obsolete.
They also claimed we were denying that truly horrible things have happened to women in academic science; rather, we concluded that these were the experiences of the few rather than the many. Notably, women in academic science express levels of job satisfaction comparable to those men express, with a couple of exceptions, which are indeed exceptions.
The actual article is massive and takes time and effort to wade through. But that’s no excuse for bloggers to take shots at our conclusions without examining the evidence we drew on. (See also the answer to the following question.)
Did you receive the response you expected to receive from your New York Times op-ed?
We fully expected that some readers would be angered by our editorial and by the article on which it is based. We had a number of conversations among ourselves to this effect, and we saw it as a price we were willing to pay. Nothing we have read has changed this belief.
One thing that has not been noted in any of the negative comments posted so far is this: our team was made up of four scholars who represent different views on this topic. Two of us (Wendy Williams and I) represent the position that the academy is largely gender-neutral, with women and men playing on a level field in grants, publications, salary, promotion, tenure, satisfaction, etc. The other two of us (Donna Ginther and Shulamit Kahn) are well known for their influential analyses showing that gender differences in salary and promotion cannot be explained by the usual suspects (type of institution, type of field, productivity). So we constituted a dream team of sorts, with opposing views. The challenge was to see how much we could agree on.
By the end of the two years it took to finish our report, we surprised ourselves by agreeing on so many things. There are undoubtedly points we do not agree on, but when disagreements arose, those points did not make it into the report, and I believe them to be few. Everything in the report was agreed to by all four of us. Why is this important? Precisely because it undermines the claims of bloggers who argue we were biased and predisposed to see things the way we did. That is most certainly untrue!