AI, tech and social justice: Meet EngSci alumna and U of T Groundbreaker Deborah Raji
The interview with Deborah Raji begins at 4:44 in this episode of Groundbreakers.
How can AI and related technologies avoid perpetuating racism and gender bias?
The latest episode of U of T’s Groundbreakers video series hosted by Ainka Jess features an interview with an EngSci alumna who has made foundational contributions to this question.
Deborah Raji (1T9), a member of U of T’s Black Research Network, discusses how bias in AI algorithms can perpetuate racism and gender bias and erode civil rights. The research she began as an undergraduate focuses on how we can avoid this trap, and on how access to technology can further inclusive excellence.
Recent EngSci graduate Inioluwa Deborah Raji (1T9) is among the leading innovators on the Forbes 30 Under 30 2021 list. She was recognized in the category of Enterprise Technology for her impactful research on racial and gender bias in AI, and for holding to account companies that use biased technology.
Her work, which she began while still an undergraduate student, has made international headlines and has already helped set new standards for accountability within the AI industry.
Inioluwa Deborah Raji (EngSci 1T9) named to MIT Technology Review’s Top Innovators Under 35
EngSci alumna Deborah Raji (1T9) has investigated racial and gender bias in facial recognition services. (Photo courtesy of Deborah Raji)
EngSci alumna Inioluwa Deborah Raji (1T9) has been named to this year’s list of Top Innovators Under 35 by MIT Technology Review, an impressive achievement for such a recent graduate.
Raji was recognized for her impactful research on racial and gender bias in facial recognition services, such as those used by law enforcement agencies. Her work, which she began while still an undergraduate student, made international headlines and has already helped set standards for accountability within the AI industry.
Holding companies accountable for biased AI – meet Year 4 student Deb Raji
Deb Raji (Year 4 EngSci + PEY) and researchers at the MIT Media Lab identified a need for stronger evaluation practices to mitigate gender and racial biases of AI products. (Credit: Liz Do)
As artificial intelligence (AI) software becomes more widely used, questions have arisen about how social biases may inadvertently be amplified through it. One area of concern is facial detection and recognition software. Biases in the data sets used to ‘train’ AI software may lead to racial biases in the end products. Since these are sometimes used in law enforcement, this raises civil rights concerns.
Year 4 EngSci student Deb Raji (1T8 PEY) and collaborators at the Massachusetts Institute of Technology (MIT) recently won “best student paper” at the Artificial Intelligence, Ethics, and Society (AIES) Conference in Honolulu, Hawaii, for identifying performance disparities in commonly used facial detection software when used on groups of different genders and skin tones. Using Amazon’s Rekognition software, they found that darker-skinned women were misidentified as men in nearly one-third of cases.
Raji hopes that this work will show companies how to rigorously audit their algorithms to uncover hidden biases. “Deb Raji’s work highlights the critical need to place engineering work within a social context,” says Professor Deepa Kundur, Chair of the Division of Engineering Science. “We’re very proud of Deb’s achievements and look forward to her future contributions to the field.”