Symposium: Generative AI, Free Speech & Public Discourse

Columbia Engineering and the Knight First Amendment Institute at Columbia are co-hosting panels with multidisciplinary experts to debate artificial intelligence and its impact on the future of public discourse, free expression, and democracy.

[Event graphic: "Symposium. Generative AI, Free Speech & Public Discourse. Tuesday, February 20, 2024, The Forum at Columbia University, 601 W 125th St, New York, NY 10027." In the background, silhouettes of human heads connected to each other by straight lines.]

About this Event

Generative AI tools like ChatGPT and Dall-E can aid society in a number of ways, but could also bring a fresh deluge of disinformation, threaten free elections, and destabilize democracies. As these tools become widely available, cheaper, and more powerful, how can self-governing societies limit the potential of generative AI to shape — or distort — public discourse? How might we harness technology to address some of the feared harms resulting from the use of generative AI? Which harms require political solutions? What goals should guide our work over the next decade?

This event will feature several keynote speakers and panel discussions with experts from law, computer science, social science, history, and other disciplines. Our guest speakers will discuss topics ranging from AI and human creativity to information integrity, power, and governance.

Co-sponsored by Columbia Engineering and the Knight First Amendment Institute at Columbia

Schedule

Welcome Remarks

Speakers: Shih-Fu Chang (Dean, Columbia Engineering), Jameel Jaffer (Executive Director, Knight First Amendment Institute)
Time: 11:00 AM – 11:15 AM
Location: Forum Auditorium

Keynote 1A: Opening up the language model black box

Speaker: Tatsu Hashimoto, Stanford University
Time: 11:15 AM – 11:45 AM
Location: Forum Auditorium

Keynote 1B: Challenges for Conversational AI in the Era of LLMs

Speaker: Dilek Hakkani-Tür, University of Illinois Urbana-Champaign
Time: 11:45 AM – 12:15 PM
Location: Forum Auditorium

Lunch

Time: 12:15 PM – 1:00 PM
Location: West Atrium

Panel 1: Empirical and Technological Questions: Current Landscape, Challenges, and Opportunities

Moderator: Shih-Fu Chang, Dean, Columbia Engineering
Panelists: Kathy McKeown (Columbia Engineering), Alex Jaimes (Dataminr), Carl Vondrick (Columbia Engineering), Smaranda Muresan (Barnard College), Arvind Narayanan (Princeton University)
Time: 1:00 PM – 2:15 PM
Location: Forum Auditorium

Break

Time: 2:15 PM – 2:30 PM
Location: West Atrium

Seed Funding Presentations

Intro: Alberto Ibargüen, Katy Glenn Bass, Samar Kaukab
Presenters: Kathy McKeown, Lena Song, Carl Vondrick, Xia Zhou
Time: 2:30 PM – 3:00 PM
Location: Forum Auditorium

Keynote 2: AI and Trust

Speaker: Bruce Schneier, Harvard Kennedy School
Time: 3:00 PM – 3:30 PM
Location: Forum Auditorium

Break

Time: 3:30 PM – 3:45 PM
Location: West Atrium

Panel 2: Legal and Philosophical Questions: Information Integrity, Trustworthiness, and the First Amendment

Moderator: Katy Glenn Bass, Research Director, Knight First Amendment Institute
Panelists: Mike Ananny (University of Southern California), James Grimmelmann (Cornell Law School), Camille François (Columbia University School of International and Public Affairs), Nadine Farid Johnson (Knight First Amendment Institute)
Time: 3:45 PM – 5:00 PM
Location: Forum Auditorium

Closing Remarks

Speaker: Shih-Fu Chang and Jameel Jaffer
Time: 5:00 PM – 5:15 PM
Location: Forum Auditorium

Reception

Time: 5:15 PM – 7:00 PM
Location: West Atrium

Keynote Presentation Abstracts

Opening up the language model black box

Tatsunori B. Hashimoto, Ph.D., Assistant Professor, Stanford University

Advances in large language models have brought exciting new capabilities, but the commercialization of this technology has led to an increasing loss of transparency. State-of-the-art language models effectively operate as black boxes, with little known about their training algorithms, data annotators, and pretraining data. Reasoned public discourse about language models requires a deeper understanding of how these systems are constructed. This talk will survey several approaches, including open-source models and benchmarking, watermarking, and membership inference, for gaining important insights into the behavior of these language models.

Challenges for Conversational AI in the Era of LLMs

Dilek Hakkani-Tür, PhD, Professor of Computer Science, University of Illinois Urbana-Champaign

Recent large language models (LLMs) have enabled significant advancements for open-domain dialogue systems due to their ability to generate coherent natural language responses to many user requests. However, these models suffer from limitations such as hallucination, the capture of undesired biases, difficulty generalizing to specific policies, and lack of interpretability. To tackle these issues, the natural language processing community has proposed methods such as injecting knowledge into language models during training or inference and retrieving related knowledge using multi-step inference and APIs/tools. In this talk, I plan to provide a brief overview of our and others' work that aims to address these challenges.

AI and Trust

Bruce Schneier, Lecturer, Harvard Kennedy School

For AI to be trusted, it must be trustworthy. This won't happen with the current market incentives. If we are ever to realize the full potential for generative AI in public discourse, we need something different.

Seed Projects

These projects are supported by seed funds from Columbia Engineering and the Knight First Amendment Institute at Columbia University:

  • Protecting the Integrity of Live Speech Videos with Modulated Ambient Light | Xia Zhou (Computer Science)
  • Enabling Unbiased Summarization of Opinions from Vulnerable Groups | Kathy McKeown (Computer Science)
  • Making Public Law: Artificial Intelligence for Legal Accessibility and Judicial Legitimacy | Lena Song (University of Illinois Urbana-Champaign, SSRC Digital Platforms Initiative)
  • Detecting AI-Generated Content via Rewriting | Carl Vondrick (Computer Science) (on behalf of Junfeng Yang)

Speakers


Mike Ananny
University of Southern California

Katy Glenn Bass
Knight First Amendment Institute

Shih-Fu Chang
Columbia Engineering

Camille François
Columbia University School of International and Public Affairs

James Grimmelmann
Cornell Tech and Cornell Law School

Dilek Hakkani-Tür
University of Illinois Urbana-Champaign

Tatsunori B. Hashimoto
Stanford University

Alberto Ibargüen
Knight Foundation (2003-2023)

Garud Iyengar
Columbia Engineering

Jameel Jaffer
Knight First Amendment Institute

Alex Jaimes
Dataminr

Nadine Farid Johnson
Knight First Amendment Institute

Samar Kaukab
Columbia Engineering

Kathy McKeown
Columbia Engineering, Data Science Institute

Smaranda Muresan
Columbia Engineering, Data Science Institute

Arvind Narayanan
Princeton University

Bruce Schneier
Harvard University

Lena Song
University of Illinois Urbana-Champaign, SSRC Digital Platforms Initiative

Carl Vondrick
Columbia Engineering

Xia Zhou
Columbia Engineering


Event Details
Date: Tuesday, February 20, 2024
Time: 11:00 AM – 5:15 PM
Location: The Forum, 601 W 125th St, New York, NY 10027