AINow Symposium Recap: Addressing the Social Implications of AI

| Briana Vecchione, Microsoft NY Civic Tech Fellow

As artificial intelligence gains exposure in media and public discourse, so too does the demand for spaces focused on studying its systems and their ramifications. Last weekend, one such space came to life through AINow, a research initiative (and soon-to-be NY-based research center!) co-founded by Kate Crawford and Meredith Whittaker and dedicated to addressing the social implications of machine learning and artificial intelligence. This year, the symposium was hosted at the equally forward-thinking MIT Media Lab. Attendance alone wasn't sufficient: each guest came with the instruction to consider the proposed prompt, "What issue does this community most need to address within the next 12 months?"

Discussion was curated through the organization of an Experts Workshop, an "invite-only, interdisciplinary convening of top practitioners and researchers on the near-term social and economic implications of artificial intelligence." Attendees presented brief flash talks around four previously defined focus areas: bias and inclusion, labor and automation, rights and liberties, and ethics and governance. Conversation quickly shifted toward the need to recognize the uneven distribution of power when designing AI systems and to design for full-spectrum community inclusion. Discrepancies in standardization were discussed as well, including the need to define malpractice and calls to better understand the processes involved in measuring, collecting, and sampling data. Researchers examined biased training data, disparities in accuracy rates, false reinforcement bias, the cumulative disadvantage of background predictions, and the need to create mechanisms that correct for historical injustice. Actionable suggestions for reshaping governance and countering economic displacement were debated.

A group discussion followed, organized to stimulate catalyzing questions and comments around AI's interaction with research, industry, and activism. Some of the resulting questions and statements follow:

  • What does it look like to build an algorithmically-mediated public space?
  • How do we democratize the AI space?
  • We need to establish more transparency around defined goals & penalties for errors
  • How can we increase social knowledge around AI nationally & trans-nationally?
  • We need to move from power thinking to design thinking, as well as from "what is to be done" to "what is already happening"
  • We need to address community segregation: if we want AI for the world, the world needs to be part of the conversation
  • We need more discussion around differing definitions of bias
  • How is law channeling AI in the US, and how do we create meaningful accountability?
  • There's no silver bullet or perfect fairness; we need to make things fairer and more equal. We should be looking at who is least included, not designing for the most included. We need to normalize admissions of self-guilt

In the evening, the space opened up for the general public symposium, which hosted three panels: Bias Traps in AI, Governance Gaps, and Rights and Liberties in an Automated World. Many of the above topics were discussed at length, each panel followed by Q&A from participants.

AI isn't new (it has existed since the 1950s), but the recent sensationalism around it reflects its ever-increasing prominence in public life. Though the AINow community is new and developing, it is strongly backed. Organizations like the Ethics and Governance of Artificial Intelligence Fund were recently founded to support the humanities, social sciences, and other disciplines in the development of AI. See here to view the Experts Workshop, here to view the public symposium, or here to subscribe to AINow's updates.
