Interviewed by Larry Au on February 22, 2023 for the Spring 2023 SKAT Newsletter
Oliver Rollins is an Assistant Professor in the Department of American Ethnic Studies at the University of Washington. He is the author of Conviction: The Making and Unmaking of the Violent Brain (Stanford, 2021). He received his Ph.D. in Sociology from the University of California, San Francisco.
Q: I read in the preface that you once considered entering medicine, and after completing your Bachelor’s, you went on to pursue an MA in Pan-African Studies, before completing your PhD in Sociology. Could you talk about how you first became interested in studying the neuroscience of violence?
A: Like a lot of us, I accidentally became a sociologist. I didn’t have plans to go do a PhD, and definitely not in Sociology. I started off in Biology. Toward the end of my undergraduate career, I had no desire to go to med school. Instead, I got interested in Black Studies, and that’s what helped me think much more like a social scientist and raise questions about what I was studying in Biology. I ended up going to the University of Louisville, doing my MA in Pan-African Studies, and it just so happened that my mentors there were sociologists, and so they were like, “Well, you know, if you want to really take these questions about race and health to another level, think about Sociology”. And so, I went to UCSF to work with Howard Pinderhughes to study violence as a health problem.
While I was at UCSF, I was also taking classes with Janet Shim and Adele Clarke, which forced me to come back to this question of science. They were the first people to introduce me to the idea that you could raise questions about science, its relationship with the social, and the social implications of the making and dissemination of scientific knowledge. From that point, I ended up changing my research direction. I had already finished qualifying exams and was getting ready to work on a dissertation about violence prevention. I went to Howard and said, “I changed my mind,” and he was 100% supportive. Following in the footsteps of folks like Janet Shim and Troy Duster, I wanted to understand how neuroscientists thought about race in their work. Neuro-technologies were booming at that time, in the early to mid-2000s. I ended up being interested in exploring the way neuroscientists studied violence with biomedical models, because I thought such research had to wrestle with the question of race; although that was not exactly the case. That was the original dissertation. It changed slightly with the book, however, where I placed emphasis not just on the complexities of race in the science, but on the way this science tried to deal with continued critiques of the biology of violence, and especially on how today’s neuroscientists of violence understand the “social” in the conceptualization, making, and potential application of the “violent brain” model.
Q: It’s always great to hear about the contingencies of academic careers. One of my favorite chapters in the book was your discussion of the “taboo of race”. As you write, there is a “fear of discussing race within public and professional spaces” (p. 107) amongst neuroscientists, which results in respondents telling you that they were “‘not looking too much into race’ … [which] materializes in studies of the violent brain as ignoring the impacts of race altogether, a race-neutral logic” (p. 108). How did you manage to get scientists to overcome this taboo and start talking about race? Could you also say something about the process of conducting this research and how you were able to overcome the hesitance of some neuroscientists to talk to a potentially critical sociologist?
A: I approached this research using multiple methods. I began with qualitative content analysis of peer-reviewed journal articles, book chapters, and books on neuroimaging of anti-social behavior and violence from the late 1980s up until around 2012. I also relied on some ethnography at conferences, particularly conferences around neuroscience, violence, and the law. I did some training in neuroscience courses, which helped me think through how neuroimaging works. But the content analysis gave me a sense that race was not being talked about in contemporary research on neurobiology and violence. It was really hard to find any discussion about race beyond demographics, or this idea that they “controlled for race”. At first, I didn’t even know how I was going to do the dissertation. I remember my advisor Howard saying, “That’s not actually a bad thing. It’s a good thing because now you need to try to figure out how they do this work on violence without actually thinking about race”. However, it was really difficult to get these scientists to agree to be interviewed. Two things happened when I tried to recruit scientists for my interviews. First, when I was reaching out, I wasn’t finding anyone who wanted to or would talk about race. This is a science that had already been plagued by these notions of racism, eugenics, and sexism from the early part of the 20th century, and many scientists were trying to leave that stain in the past. Second, I think me being a Black man played a role too. Most of the scientists I needed to recruit were white men, and I believe they were not too comfortable being asked questions, by me, about the ethics of this science, and especially about race or racism in their work. Not necessarily because they don’t care, but because there are larger taboos around talking about race in this research program. These taboos about race, I learned through the interviews that I did get, inconspicuously structure how the research program operates. I remember one neuroscientist telling me that “you’re not going to get a lot of people to talk to you”. I was just naming other scientists that I would like to talk more with, and the respondent said, “yeah, they’re not going to go on the record talking about these issues”.
It was less difficult to get them to open up once you got them in the room. No one had a problem talking about race once the interview began. Other researchers who study this in genomics, like Janet Shim, told me that “people will talk about these things more openly than what you think”. They’re not necessarily hiding it, because this is not necessarily seen as problematic. Many of the researchers frame their work as a way to help historically marginalized and racialized communities, because violence is thought to disproportionately impact these communities in a particular way. I think about this a lot today because in neuroscience—post-2020, post-George Floyd and Breonna Taylor—there’s been a change in science; all of a sudden, scientists are talking more frankly about systemic racism. I think if I conducted my book project today, I would likely have more neuroscientists willing to be interviewed. But at the time it really did make it difficult. As I wrote the book, I thought a lot about, or with, the work of Amade M’charek, particularly M’charek’s conversation regarding the “absent presence of race.” My argument is that race, and especially the racial past of the biology of violence, continues to haunt the science today; actively policing what can be discussed and how, and therefore shaping science without being formally recognized as such. More recently, I’ve been thinking about how to accurately and empirically capture the absent presence of race in scientific research. For SKAT researchers, this is going to be a key question: how to show the impacts or effects of race or racism in scientific and biomedical settings that seemingly operate through race-neutral practices, or when scientists acknowledge that race is “social” and has no biological meaning or purpose in their research.
Q: You invoke Troy Duster’s description of the “unenviable task”, or how “researchers must figure out how to effectively consider racialized experiences without ‘endowing race with a false sense of biological determinism’” (p. 121). How, in your view, should neuroscientists engage with social science in order to understand how racism and structural inequality shape neuroscience? Are there any examples of anti-racist neuroscience that come to mind?
A: How to do this in a more collaborative way, to get scientists to think about these things, is something we in SKAT should continue to work on. I’m thinking about this task in two new projects. First, I’m trying to focus more exclusively on elucidating the ways that race and scientific racism actually impact the making of neuroscience research by investigating how neuroscientists study implicit racial bias. This is a different group of neuroscientists than those in my first project. There are more people of color and more women in this science. Their research thinks about the brain’s relationship to race: both how people recognize the racial identity of others and of themselves, and how people mitigate their implicit racial biases. This science raises some interesting questions about the potential for neuroscientists to be anti-racist, and, at the same time, it also raises questions about the potential neuro-biologization of racism. Dorothy Roberts and I wrote about some of these issues in an article in the Annual Review of Sociology. I’m also starting a new project to think through the relationship between social justice and neuroscience. Here, I am asking: What are the links between social justice and science? Are those things even compatible? Can neuroscience actually attend to social justice issues like anti-racism?
As for examples of anti-racist neuroscience, I would say if we follow Ruha Benjamin and others, then we understand that some technologies operate under and help remake particular types of racial logics, but these racialized outcomes are not necessarily something that’s naturally endowed within the technology itself. Instead, the key here is the ways in which the values of society get normatively reconstituted through scientific practices. I do not think this is as simple as saying that researchers’ or developers’ implicit racial ideologies are being baked into these machines or algorithms. I think it’s a bit more complex. For example, what do we count as good neuroscience in the first place? How do existing, relied-upon neuroscientific modalities, techniques, and guidelines for “good science” create and recreate the conditions for neuro-knowledges and technologies to readily support and supply avenues for racial thinking and racialized violence to be reconstituted in society?
I also think here about things like the Black in Neuro movement, which came out right after George Floyd’s murder in the Summer of 2020. Here’s a group of young, mostly Black neuroscientists in grad school, postdoc positions, or early career professorships, who began to coalesce and think about ways in which they could change their research to address issues of scientific racism, both within their field and outside of it. Now, they have mostly focused on increasing the number of Black neuroscientists and creating better support for those who are there now. But still, they can be seen as an example of how anti-racist logics are starting to be picked up in the neurosciences. Whether or not that can lead to something anti-racist in the future is an open question; it will certainly have to move beyond the politics of representation. In my third project, I want to focus on groups like Black in Neuro. However, I also want to historicize such social movement type activities. Thinking about science in the 1960s, I think about young researchers and scientists who were in grad school in the midst of the Civil Rights Movement, the Anti-War Movement, and other types of movements which were affecting their politics, and potentially their desire to produce a particular type of “socially just” science. Yet, when we look at the 1980s and 1990s, we didn’t necessarily see a dramatic shift towards a democratic or social justice-oriented new science. I want this third project to help outline and interrogate some of the social and structural barriers that have stood, and continue to stand, in the way of any movement towards anti-racism; in order to help outline ways in which scientists like those in Black in Neuro may join us in the struggle against racism and other systemic social inequities. Anti-racism is a constant struggle. Translating that into a technology or a science is going to be hard. There is no finish line where you get to say, “look, this is an anti-racist neuroimaging machine”. However, we must start to think about the steps toward an anti-racist science and the technologies and practices that may help move us toward this goal.
Q: You discuss how advances in neuro-imaging, measurement, and visualization, such as from CT scans to the advent of sMRI and fMRI, have enabled neuroscientists to claim more accuracy in pinpointing the neuro-biological roots of violence while drawing on the cultural authority of these “objective” images. How much of this is a story of technology and the promise of ever-improving precision in the tools? How are these technological developments tied to broader social concerns and anxieties?
A: Two things come to mind. First, from talking with Troy Duster, we should be wary of following the newness of technology and not thinking about the continual ontological commitments within science. You can have new technologies all you want, but what he forced me to think about is what the ontological commitments in the sciences are, and what they are doing in the first place. One of them that I explore in this book is the idea that “we can separate criminals from non-criminals within society”. That is a particular type of ontology that this science relies on to conceptualize and properly actualize this research on violence. To be fair, I think this commitment applies to much of Criminology too; some of that science also buys into this idea that violent and non-violent people are two uniquely distinctive groups. Second, I think about what Du Bois meant by “progress”. In Sociology, we read Marx in many ways, often going well beyond thinking about capitalism or well beyond thinking about economic markets; people use Marxist critiques to think about all kinds of power dynamics within society. My use of a Du Boisian framing of progress in the book’s preface is sort of my way of saying that we could, or should, do something similar with Du Bois. Racial theory isn’t only a niche thing; it can be used to interrogate more generally the power dynamics, structure, and organization of society.
For example, I use Du Bois’s critical analysis of progress to think about the similar ways that US society frames the ideas of racial (or social) progress and scientific progress: The idea that science is constantly moving towards something good or improved with new technology—that moving scientific knowledge forward means the field is always advancing (for the better), seems very similar to the idea that the further we are away from the period of enslavement or Jim Crow, the better or more advanced our society has become on the question of race. Du Bois didn’t buy this accommodating narrative about progress in the early 20th century, and I don’t think many of us buy it today. I would say my critique of the neuroscience of violence is very much tied to questions about technological precision and prowess; yet such advances or progress have never fully quelled the concerns that the public has about biological research on the roots of violence. And, it’s pretty clear that the neuroscience of violence has not done a better job of addressing these concerns or anxieties. Using technologies like neuroimaging to compute and predict one’s risk for antisocial behavior or violence has failed to capture certain, vital, risk factors, particularly complex embedded inequalities. So, these new biosocial models of risk may do a good or better job of recognizing the entwined relationships between social and biological variables, but systemic inequality, like systemic racism, is a social factor that is not easily quantified or controllable in these models. Complex social practices, then, are omitted from these risk models. Neuroscientists that I spoke with know that this is a problem, but they don’t know how to capture or calculate these forces in a neurobiological model. But we can, and should, ask: who are these models of risk for, then? These neuro-technologies are supposed to capture whether or not someone is a criminal and their “risk”, or risk scores. How do you predict that without dealing with systemic inequalities—it doesn’t make any sense! How can we talk about whether or not a young Black kid in an inner-city neighborhood is going to be violent or not by just looking at their brains, without capturing systemic inequalities such as racism or capitalism?
Neuroscientists that I talked with would say that crime or violence are social products, here making a distinction between criminality and “antisocial behavior”—as a neuropsychological disorder. Yet, when neuroscientists rethink violence through a biomedical lens, they reduce its meaning through a brain-logic—particularly, they focus on two areas of the brain: the amygdala, because violence is reduced to a lack of emotional control, and the prefrontal cortex, because violence is seen as a lack of impulse control or bad decision-making. They limit violence, then, to emotional control, impulsiveness, and decision-making, reframing it as a biomedical disease. However, this uncritically limits the kinds of violence and people that matter for them in their research, and is implicitly remaking what violence means. This is another way in which these biosocial models can fall short of fully comprehending the dynamic role of the social. And so, we should talk about what the technologies do and the spaces their use opens up to litigate and maybe transform our existing social knowledges, for good or bad. We have to keep sight of the social-ness of the technology itself, then, keeping in mind that certain types of social factors, arguments, politics, and ideologies are often easily grasped onto by both scientists and the public to help rationalize these neuroscientific claims about violence.
Q: You also describe the underlying promissory undertones of neuroscientific research into violence, and the therapeutic promise of “fixing” the “violent brain”. What responsibilities do neuroscientists have when it comes to the implications and interpretations of their work?
A: This gets at the intersections of Sociology and Bioethics. First, we should ask, “what are scientists constructing as the problem, and do we agree with this scientific framing?” This question is really getting at what types of solutions are possible, given the way science has approached and framed the societal problem that it views as in need of better knowledge or technological fixing. For example, I don’t buy the idea that we can use neuroimaging to scan the brains of particular people in order to alleviate crime and violence within society. Even though today’s neuroscientists are not necessarily advocating for brain-based interventions for crime, I, like Duster, still think this is a dangerous idea to reintroduce today. In part because violence and crime are social products that are complexly entangled in our society’s unequal racial, gendered, and class politics—social forces that this science has already admitted are not part of the neurobiological risk prediction models they design.
Probably the most interesting thing I found in relation to the “therapeutic promise” is that most of the proposed interventions are things that social scientists have been saying for decades, like improving childhood nutrition or eliminating poverty. The twist here is that these scientists all say this is necessary to build better brains, which will then decrease one’s risk for a biomedical disorder and therefore one’s risk for violence. This is also why many of these neuroscientists have limited their empirical focus to so-called psychopaths or people who present some features of anti-social behavior, and are trying to figure out what’s the best way to help those people. That is also at the heart of a big tension in this science: Some researchers seem to see this as a science about the risk for biomedical or DSM-defined disorders, which increase the risk that one may engage in violence or criminal behavior. Others, however, see the neuroscience of violence as a way to improve our understanding of crime in general. This important debate within this science has not been resolved. The bottom line, however, is that when scientists talk about the neuroscience of violence, most are talking about imaging “people who have psychopathic tendencies”, but the public will likely interpret this science and its interventions (social or biological) as being about crime and criminals in general, and will likely always read race and other social practices of power and inequality into these scientific findings or uses.
Scientists have a responsibility to talk about what their science can and cannot do. There’s been more work around how these imaging processes are coming into the courts at different stages, and there’s a responsibility of neuroscientists to talk about it—and many of them do! The idea of showing brain scans, to “see” where a brain is problematic, seems to have way more power over jurors and judges. One of my respondents said, “unfortunately the only way we can convince people that this is a problem is we have to show pictures to judges”. For me, this means we must also pay close attention to how scientists are engaging with their work once it’s translated into new social worlds. This will help us understand how this politics of responsibility that’s being adopted by some of these scientists actually looks in practice.
There’s another question that you have in there too, about thinking about neuroethics as a sociologist. I think it’s interesting because it raises the question of the types of problems that ethics can’t answer. Ethics may not be the right framing, and I want to raise questions as to whether ethics can properly address issues of social justice. If social justice or antiracism is only framed through a question of ethics, such conversations may not include larger sociological structures. Ethics may reproduce a particular type of normativity in science. An argument I have in the book is that we’re not talking about racist scientists, and we’re not necessarily talking about a very reductionist or deterministic science. But part of the bigger problem is that it’s a very normative science: it can’t actually be used within a criminal justice system and then deal with the systemic racism and inequalities within that system. All it’s really doing is making that system more efficient. For me, that goes beyond a moral conflict concerning whether we can or should conduct this science better. It’s a question of how and where society aims to pick up and apply these knowledges and technologies. This is something I think neuroscientists should be able to speak to. We should hold them accountable to talk about how they think their work will be taken up outside of the lab. Encourage them to speak to whether and how these neuro-technologies and knowledges manage the impacts of existing power dynamics that help shape the functional purpose of certain social worlds and institutions. Maybe then this will help scientists think more about the questions they’re asking in the first place. If you can’t actually deal with those things, should we even be asking questions around the brain and violence? Are there better uses of neuroimaging within society, uses that deal with things besides these questions around violence?
Q: As someone outside of this field but somewhat plugged into broader debates about the ethics of biomedical innovation, I see a lot of discussion about neuroethics around things like Neuralink and neural implants. As you write at the end of the book, one of the “ultimate goal[s] of neuroimaging research” is neuroprediction or “brain-reading” (p. 137). What do you anticipate to be some of the ethical, social, and political issues that arise with further advances in this area of research? What should SKAT scholars pay attention to?
A: This is actually something I’m trying to work through in an essay now. The book was written two years ago. With the idea of prediction, it raises a question: Why are we predicting certain things in the first place? What are we actually using this prediction for? To me, prediction, the way in which someone is placed “at risk”, is often entwined with the way we construct our visions or meanings of historically marginalized communities. When we calculate that “someone’s at risk for violence”, aren’t we inviting society to start treating them as if they’re already a criminal? And so, we have to think about whether or not prediction is actually even the right logic for thinking about health or social behaviors. Are there better ways in which we can think about prevention of violence or crime? Prevention, perhaps, beyond wanting to predict who will and who will not be a criminal in the future.
Moreover, I want to recognize how these scientific technologies about violence, or any behavior, shape our larger democratic understandings of safety and normality. There’s something about the way in which the idea of the “violent brain” shapes what we consider to be a “normal person”, the other side of criminality. Neuroscientists, as they do this work, have a very narrow understanding of whatever normal is: not having any psychological diseases, not having any head lesions, etc. Who actually fits into this idea of normal? With the BRAIN Initiative, there will be a lot more technologies to come, and there are a lot more questions that sociologists need to raise about their potential use in society.
We should also think about what actually counts as a risk factor in some of these models, and what doesn’t get measured as a risk factor. Racism is not being measured as a particular type of risk factor in the neuroscience of violence. This is really important for the new BRAIN Initiative, where the whole focus is to bring about new technologies. Many more scientists are trying to think about social context in some way. That still raises some questions around what they mean by “social context”. When we hear about the social environment, one of the things that we have to question is: What actually are you counting as the environment or as the social? I’m again thinking with the idea of the “absent presence” and how you capture these larger social structures and mechanisms that we, as sociologists, say absolutely impact our behaviors. What are the limits of these technologies, and can they actually capture this? Where does social theory impact this? In an article, we wrote about how neuropsychology currently thinks about race and neuroscience. One of the recommendations was to just ask neuroscientists: How do you think about social theory? There’s a lot of theory out there. How do you think about these questions of race, and where does that come into your work?
The other thing, for those of us in STS: as much as we talk about imaging machines as technologies, I’m convinced that the brain itself is the technology that’s being created here. The neuroscience of violence is not just about scanning a brain to detect the biological roots of crime. Really, it’s a research program based on the idea that one can filter or test a bunch of ideas through special access to the brain, and that the brain, once opened up, or made readable, is going to spit out hidden or missing facts that will complete or fundamentally change our (limited) understanding of violence. If we think about the way in which scientists think of the brain as this meeting place between the biological and the social, then it makes sense that they’re saying that certain things are going to be explained if we just know more about the inner workings of the brain. The brain is being continually worked up, technically assembled into an empirical construct that can show the anatomical, cognitive, and emotional characters of violence, and into a material site to work upon in order to predict, pursue, and fix our deviant thoughts and unhealthy behaviors.