Miranda Bogen is creating solutions to help govern AI
To provide AI-focused women academics and professionals with the recognition they deserve, TechCrunch is launching a series of interviews spotlighting remarkable women who have made significant contributions to the AI revolution. Throughout the year, we will feature several pieces highlighting key work that often goes unnoticed. You can find more profiles here.
Miranda Bogen is the founding director of the Center for Democracy & Technology’s AI Governance Lab, where she works to develop solutions for effectively regulating and governing AI systems. She previously helped guide responsible AI strategy at Meta and served as a senior policy analyst at Upturn, an organization committed to using technology to promote equity and justice.
Briefly, how did you get your start in AI? What attracted you to the field?
I was initially drawn to the field of machine learning and AI by witnessing how these technologies intersected with fundamental societal conversations — discussions about values, rights, and the communities that are often left marginalized. Through my early work exploring the intersection of AI and civil rights, I came to understand that AI systems are much more than mere technical artifacts; they are intricate systems that both influence and are influenced by their interactions with people, bureaucracies, and policies.
Bridging technical and non-technical worlds has always been a strength of mine, and I was excited by the opportunity to cut through the facade of technical complexity and help communities with different kinds of expertise shape the development of AI from its foundation.
What work are you most proud of (in the AI field)?
When I first entered this field, many people still needed convincing that AI systems could produce discriminatory effects on marginalized populations, let alone that those harms needed to be addressed. There is still a considerable gap between the current state and a future where biases and other harmful effects are systematically addressed, but I take satisfaction in knowing that the research my collaborators and I conducted on discrimination in personalized online advertising, along with my work within the industry on algorithmic fairness, has contributed to tangible change.
Our work played a role in prompting meaningful adjustments to Meta’s ad delivery system and facilitated progress toward reducing disparities in access to critical economic opportunities. Though there is still much work to be done, I am encouraged by the strides we have made in acknowledging and addressing these issues.