Episodes
Wednesday Oct 14, 2020
Why We Do This: Reflecting on Six Months of Radical AI with Dylan and Jess
In this special episode of The Radical AI Podcast, Dylan and Jess pull back the curtain to reflect on six months of the show! From qualitative research to ontological horseplay, this episode has it all!
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Oct 07, 2020
What is media integrity? What is media manipulation? What do you need to know about fake news?
To answer these questions and more we welcome to the show Claire Leibowicz and Emily Saltz -- two representatives from the Partnership on AI’s AI and Media Integrity team.
Claire Leibowicz is a Program Lead directing the strategy and execution of projects in the Partnership on AI’s AI and Media Integrity portfolio. Claire also oversees PAI’s AI and Media Integrity Steering Committee.
Emily Saltz is a Research Fellow at Partnership on AI for the PAI/First Draft Media Manipulation Research Fellowship. Prior to joining PAI, Emily was UX Lead for The News Provenance Project at The New York Times.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Sep 30, 2020
The State of the Union of Surveillance: Are Things Getting Better? with Liz O'Sullivan
What should you know about the state of surveillance in the world today? What can we do as consumers to stop unintentionally contributing to surveillance? The facial recognition industry had a reckoning after the murder of George Floyd. Are things getting better?
To answer these questions we welcome Liz O'Sullivan to the show.
Liz O'Sullivan is the Surveillance Technology Oversight Project's technology director. She is also the co-founder and vice president of commercial operations at Arthur AI, an AI explainability and bias monitoring startup. Liz has been featured in articles on ethical AI in the NY Times, The Intercept, and The Register, and has written about AI for the ACLU and The Campaign to Stop Killer Robots. She has spent 10 years in tech, mainly in the AI space, most recently as the head of image annotations for the computer vision startup Clarifai.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Sep 23, 2020
What are the limitations of using checklists for fairness? What are the alternatives? How do we effectively design ethical AI systems around our collective values?
To answer these questions we welcome Dr. Michael Madaio to the show.
Michael is a postdoctoral researcher at Microsoft Research working with the FATE (Fairness, Accountability, Transparency, and Ethics in AI) research group. He works at the intersection of human-computer interaction, AI/ML, and public interest technology, using human-centered methods to understand how we might equitably co-design data-driven technologies in the public interest with impacted stakeholders.
Michael, along with other collaborators at Microsoft FATE, authored the paper: “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI”, which is one of the major focuses of this interview!
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Sep 16, 2020
What is the tech to prison pipeline? How can we build infrastructures of resistance to it? What role does academia play in perpetuating carceral technology?
To answer these questions we welcome to the show Sonja Solomun and Audrey Beard, two representatives from the Coalition for Critical Technology.
Sonja Solomun works on the politics of media and technology, including the history of digital platforms, polarization, and the fair and accountable governance of technology. She is currently the Research Director of the Centre for Media, Technology and Democracy at McGill's Max Bell School of Public Policy and is finishing her PhD in the Department of Communication Studies at McGill University.
Audrey Beard is a critical AI researcher who explores the politics of artificial intelligence systems and who earned their Master's in Computer Science at Rensselaer Polytechnic Institute.
Audrey and Sonja co-founded the Coalition for Critical Technology, along with NM Amadeo, Chelsea Barabas, Theo Dryer, and Beth Semel. The mission of the Coalition for Critical Technology is to work towards justice by resisting technologies that exacerbate inequality, reinforce racism, and support the carceral state.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Sunday Sep 13, 2020
How can we inform and inspire the next generation of responsible technologists and changemakers? How do you get involved as someone new to the responsible AI field?
In partnership with All Tech Is Human we present this livestreamed conversation featuring Rumman Chowdhury (Responsible AI Lead at Accenture) and Yoav Schlesinger (Principal, Ethical AI Practice at Salesforce).
This conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge.
The conversation does not stop here! For each of the episodes in our series with All Tech Is Human, you can find a detailed “continue the conversation” page on our website, radicalai.org. For each episode we include all of the action items debriefed in the episode, as well as annotated resources mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
Wednesday Sep 09, 2020
Democratizing AI: Inclusivity, Accountability, & Collaboration with Anima Anandkumar
What are current attitudes towards AI Ethics from within the tech industry? How can we make computer science a more inclusive discipline for women? What does it mean to democratize AI? Why should we? How can we?
To answer these questions and more we welcome Dr. Anima Anandkumar to the show.
Anima holds dual positions in academia and industry. In academia, she is a professor in the Computing and Mathematical Sciences department at Caltech. In industry, she is the director of machine learning research at NVIDIA, where she leads the research group that develops next-generation AI algorithms. Anima is also the youngest named chair professor at Caltech, where she co-leads the AI4science initiative.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Sep 02, 2020
Designing for Intelligibility: Building Responsible AI with Jenn Wortman Vaughan
What are the differences between explainability, intelligibility, interpretability, and transparency in Responsible AI? What is human-centered machine learning? Should we be regulating machine learning transparency?
To answer these questions and more we welcome Dr. Jenn Wortman Vaughan to the show.
Jenn is a Senior Principal Researcher at Microsoft Research. She has been leading efforts at Microsoft around transparency, intelligibility, and explanation under the umbrella of Aether, their company-wide initiative focused on responsible AI. Jenn’s research focuses broadly on the interaction between people and AI, with a passion for AI that augments, rather than replaces, human abilities.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
Wednesday Aug 26, 2020
How should diplomacy and international cooperation adjust to the significant global power that major tech companies wield?
In partnership with All Tech Is Human we present this livestreamed conversation featuring Alexis Wichowski (adjunct associate professor in Columbia University’s School of International and Public Affairs, teaching in the Technology, Media, and Communications specialization) and Rana Sarkar (Consul General of Canada for San Francisco and Silicon Valley, with accreditation for Northern California and Hawaii).
This conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge.
The conversation does not stop here! For each of the episodes in our series with All Tech Is Human, you can find a detailed “continue the conversation” page on our website, radicalai.org. For each episode we include all of the action items debriefed in the episode, as well as annotated resources mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
Wednesday Aug 19, 2020
Is Uber Moral? The Ethical Crisis of the Gig Economy with Veena Dubal
What is precarious work and how does it impact the psychology of labor? How might platforms like Uber and Lyft be negatively impacting their workers? How do gig economy apps control the lives of those who use them for work?
To answer these questions and more we welcome Dr. Veena Dubal to the show.
Veena is a professor of law at UC Hastings. She received her J.D. and PhD from UC Berkeley, where she conducted an ethnography of the San Francisco taxi industry. Veena’s research focuses on the intersection of law, technology, and precarious work.
Full show notes for this episode can be found at radicalai.org.
If you enjoy this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod