George Mason University boosts AI expertise to study bias and ethics


George Mason University continues to lead in responsible artificial intelligence, recently hiring two faculty who will work in the university’s cluster on AI, Social Justice, and Public Policy.

Thema Monroe-White, an associate professor of artificial intelligence and innovation policy in the Schar School of Policy and Government and the College of Engineering and Computing’s Department of Computer Science (CS), and Dasha Pruss, an assistant professor of Philosophy and Computer Science, will spend 25 percent of their time in the CS department.

Thema Monroe-White
Monroe-White studies equity in data science and algorithmic bias. Photo provided.

Monroe-White joins George Mason from Berry College, a small liberal arts college in Rome, Georgia. Coming from the consulting world, she was eager to establish herself there when she joined in 2017. “The best thing about academe is certainly the autonomy. You know, when you start to think about what your research agenda is, you are trying to appease multiple voices, like your department chair, the tenure and promotion committee, the broader professional community,” she said. “But ultimately, I felt coming from the nonprofit sector I had a chance to identify what things were important and researchable that would have some legs long-term. For me, that was equity and inclusion in data science, AI, and algorithmic bias.”

She joined George Mason earlier this year for numerous reasons. “I was looking for a position that recognized interdisciplinary work, so having a joint appointment with computer science is a bonus. I want to be heard by computer scientists and those who understand and speak that language. I collaborate with engineers, I collaborate with computer scientists, because the insights from my research really need to be heard, mostly, by the technicians.”

Dasha Pruss
Pruss's research focuses on carceral AI. Photo provided.

Pruss comes to George Mason after earning a PhD in history and philosophy of science from the University of Pittsburgh and completing a postdoc in the Embedded EthiCS program at Harvard. Her work focuses on what's called carceral AI: technologies used in the criminal legal system, particularly algorithms known as recidivism risk assessment instruments. “They're adopted as reforms, typically to make sentencing more consistent or to reduce bias in sentencing decisions,” she said. “But in the last few years there's been a lot of concern about these types of algorithms having racial biases baked into them, and part of that is because the U.S. criminal legal system has structural inequities.”

She said that thinking about the use of these tools leads naturally to broader questions about the purposes of punishment: whether it is chiefly about predicting and controlling crime, as opposed to other values such as rehabilitation. That led her to investigate whether such shifts in sentencing philosophy are actually occurring. “I wanted to tease out that assumption—are these tools being used in the way that we think they are? Are they actually having the positive impacts that the proponents of the reforms hope that they are?” Unfortunately, Pruss says, the answer to both questions is no.