By Arielle Keller and Isabel Low
Injustice and inequality are everywhere. Even the evidence-based world of academic science is not immune. Scientists are people, after all, and are susceptible to subjective biases that impact decision-making and behavior. Though pervasive, this injustice can be difficult to conceptualize, especially for those who have not experienced it first-hand. How do our societal stereotypes and implicit biases negatively impact scientific progress and unjustly hinder career development? We, a group of neurosciences graduate students at Stanford University, have decided to explore this question in the way we’ve been trained—with a deep dive into the data.
Let’s start by considering our assumptions. We constantly make snap judgments about the world, and these quick, effortless evaluations can be really useful in some cases. For example, the brain’s ability to use assumptions from the past can help us anticipate the motion of cars or the way certain words flow together in sentences. However, these types of automatic guesses about our world can really get in our way when we’re trying to make complex decisions that require us to take our time, like deciding whether we like a new acquaintance or determining whether a scientific study was done rigorously.
Implicit biases are assumptions that affect our behavior unconsciously, and they often stem from stereotypes in our social environments. Biased beliefs about the competence of under-represented minorities in STEM fields can lead to inequality in publishing, grants, and admissions, and can even hinder the progress of science itself. By exploring the ways that underlying assumptions affect judgments in a scientific context, we can ask the question: are scientists truly objective?
Figure from Moss-Racusin et al. (2012) depicting faculty ratings of identical research lab manager applications, bearing either the name "John" or "Jennifer"
Moss-Racusin et al. (2012) demonstrated that STEM faculty evaluate male and female applicants for a research lab manager position differently, even when application materials are identical. In this randomized, double-blind study of 127 science faculty, researchers found that an application with the name “Jennifer” was evaluated less favorably than the exact same application bearing the name “John.” Specifically, faculty rated “Jennifer” as less competent and less hirable than “John,” were less willing to offer her career mentorship, and suggested a starting salary $4000 lower on average. Both male and female faculty members exhibited this strong bias, and the effect was unrelated to the faculty members’ age, tenure status, or particular scientific field.
Media coverage of this scientific study was extensive, with articles reporting the findings in the New York Times, Discover Magazine, and even the popular Facebook page “IFL Science”. Commenters flocked to these articles to voice their reactions, which ranged broadly from sexist remarks to doubts about the scientific validity of the study to enthusiastic support for gender bias research. The authors of the study then systematically evaluated these comments and published their analysis in a follow-up study (Moss-Racusin et al., 2015). They found that male commenters were more likely than female commenters to react negatively to the 2012 study.
Figure depicting data from Handley et al. (2015) showing men's and women's ratings of the quality of scientific abstracts. The real abstract reports gender bias in science; the fake version had several words altered to purport no gender bias in science.
To build upon this naturalistic study of online comments, another research team designed an experimental test of biased reactions to evidence of bias (Handley et al., 2015). This way, the researchers could more carefully control what they were measuring (e.g., ensuring that no one was counted twice). The abstract from the original 2012 study was given to a sample of 205 men and women, who were asked to rate the quality of the evidence. Handley et al. (2015) showed that men evaluated the evidence of bias less favorably than women did. You might think that STEM faculty, who have been trained throughout their careers to be objective thinkers, would be less susceptible to bias than faculty in other departments, but this isn’t the case. Using a separate sample of 205 faculty members, the researchers found that the difference between male and female ratings was specific to STEM faculty: professors in the arts and humanities showed no gender difference in their reactions to the 2012 abstract.
The researchers then tested whether the opposite effect would hold: do women rate evidence reporting no gender bias less favorably than men do? By tweaking a few words in another abstract reporting gender bias, the researchers created a fake version purporting the absence of gender bias. Indeed, women rated the fake, no-bias abstract less favorably than men did, a mirror image of the earlier result. This implies that both men and women can react defensively when they encounter a study that doesn’t line up with their lived experience.
These findings reveal that scientists have subjective biases (as well as biased reactions to evidence of those biases) that could negatively impact under-represented minorities in STEM. Importantly, both men and women hold implicit biases, and people who value their objectivity most highly may actually be the most likely to fall prey to them. There are several possible explanations for the biases revealed by these studies, including confirmation bias (we are disinclined to believe evidence that contradicts our prior beliefs), social identity theory (we tend to perceive our own social groups favorably), and system justification theory (privileged groups may subconsciously seek to justify their privileged status).
How can we overcome gender biases in science and in other realms of our lives? One simple but powerful method to combat negative implicit biases is just to recognize them in ourselves. By accepting that we all have biases and by attempting to replace our snap judgments with careful consideration, we can reduce the extent to which our assumptions influence our decisions. In order to fully acknowledge our biases, however, we must be able to accept the evidence that they exist.
In our upcoming blog series, we will take a deep dive into several studies that characterize issues faced by under-represented minorities in STEM. We will engage in rigorous discussion of these works and the implications the data have for hiring and career progression, mentorship opportunities, support for family commitments, and workplace discrimination. We will talk about sexism, the visibility of women in academia, and our perceptions of others’ abilities. We will also review the effectiveness of different intervention approaches to directly address these issues, including a look at how virtual reality can reduce racism. Although we mainly focus on studies of gender bias due to the relative lack of data on other underrepresented groups, we believe these issues affect many groups in similar ways. There is a strong need for more focused research on specific experiences going forward.
The struggle to achieve equal representation in STEM is often described as a leaky pipeline. Women and other minorities tend to be better represented at early stages of academic careers, but disproportionately drop out the further along you look. Multiple factors likely contribute to this leakiness. For example, we often assume that the traits needed to be successful in a field are embodied in those who have already succeeded. If you don’t see role models like yourself at the top, you might mistakenly assume that you cannot succeed. We will discuss two studies related to the visibility of female role models in academia and offer some ideas to increase representation at conferences and seminars.
Each stage of this pipeline typically involves a selection process, which is a minefield for issues of implicit and explicit bias. Many institutions are beginning to make efforts to reduce the effect of bias on selection processes, but the unjust effects of implicit biases are far from eliminated. As one example, we will discuss how female post-doctoral applicants are portrayed differently from male applicants in recommendation letters.
The social environment in STEM fields can also contribute to the leaky pipeline. We’ll dig into this idea by examining studies investigating how one’s sense of belonging in an environment can impact career decisions, confidence, and success. For example, STEM fields typically do not offer much support for family obligations, which can make it challenging for academics of both sexes to balance family and career. We’ll review several studies that indicate the negative impact of family obligations on scientific careers disproportionately affects women.
Inequality places stumbling blocks in both the daily lives and the long-term career progression of talented individuals. Even for those who work their entire lives to give themselves the best chance at the career they want, individual merit and hard work alone are too often not enough to overcome bias. Achieving our goal of a true meritocracy in science and beyond will require the hard work and clever problem-solving of many people, and scientists can take a leading role in this mission.
But how can we combat bias when it is so widespread and deeply ingrained? Given that we all harbor implicit biases, we all must take responsibility for this fight. Acknowledging that the problem exists and understanding the data behind it is a good start, but in each post in this series we will also explore a few ideas for moving forward. We will examine the effectiveness of interventions at an individual, institutional, and policy-wide level, including a specific look at training programs for college students, institutional policy changes, and even the use of virtual reality to help us step into each other’s shoes.
We dedicate this series in memory of Dr. Ben Barres, a beloved member of our Stanford community who passed away this winter. Dr. Barres not only revolutionized the field of glial neuroscience, but also spent a substantial portion of his career combating the many challenges faced by under-represented minorities in science. He was an incredible leader in this movement, broadly disseminating information about the problems at hand and working tirelessly to both offer and implement solutions. He has inspired us to emulate his approach. Our blog series will conclude with an article about Dr. Barres’ revolutionary work, exceptional mentorship, and unique approach to combating inequality.