
AI in Elections: How Should Society — and Engineers — Respond?

Two computer scientists at Columbia Engineering share their thoughts on how to ensure that emerging technologies benefit humanity.

November 01, 2024
Grant Currin

As the election season draws to a close, scholars from across Columbia University recently gathered to discuss how emerging AI systems are impacting our democracy and what should be done to counter their negative effects. 

In a wide-ranging conversation, the discussants drew parallels to the history of technology and innovation as they grappled with some of the most difficult problems facing political systems around the globe.

Associate Professor Eugene Wu and Assistant Professor Brian Smith (both faculty in Columbia Engineering’s Computer Science Department) joined historian Alma Steingart and political scientist Yamil R. Velez at the Oct. 22 panel discussion, Awakening Our Democracy: AI in the Ballot Box. Journalist Dhrumil Mehta served as moderator. The event was hosted by University Life and co-sponsored by the Data Science Institute, the Office of the Vice Provost for Faculty Advancement, and the Columbia Journalism School. 

Smith and Wu are members of the Data Science Institute, with Wu serving as co-chair of the Institute’s Data, Media and Society Center. Smith is a member of that center and of the Center for Smart Streetscapes. 

We caught up with Smith and Wu after the event to discuss this crucial topic.

Q: Why are recent advancements in AI making people so concerned about the election and political life?

Brian Smith: When we’re thinking about how to deal with the consequences of new technologies, it can be useful to consider how society dealt with the consequences of technologies that have now become common. 

For example, we had zero traffic crashes before the invention of the automobile. New York City didn’t have traffic lights, and streets operated the way sidewalks do today: you could just walk or ride your horse anywhere. But then came cars, and suddenly people had the ability to move really, really fast.

Similarly, AI lets people create fake text, images, and videos at high speed. So, my question is: What are the traffic laws and rules we need to create around this to develop a safe infrastructure? We already have libel laws and slander laws, right? If it becomes a problem that people are making up stories and creating false images and videos, I think that we as a society can lean into some of the same strategies for enforcing rules that prevent that. 

Q: So, you’re suggesting that regulation, not just technology, is essential in addressing these challenges?

Smith: Exactly. Technology alone can’t be the solution because you can always create a version without built-in protections or guardrails. We need laws and policies in place. I think people will become more open to having laws and policies against creating a lot of this stuff as more people run into fake content and realize they got duped.

Eugene Wu: I agree that it’s important for everyone to use their voice and exercise their political power — but that’s not sufficient. Institutions evolve at a much slower rate than technology, and the gap between the two is widening. That means we need technological guardrails in addition to social and political stopgaps in jurisdictions across the world. 

One analogy is nuclear technologies. They don’t rely on human operators being perfect. If an operator messes up or there’s an earthquake, there are multiple layers of fail-safes to prevent the worst-case scenario. That wasn’t always the case. Take the Therac-25, a radiation therapy machine that, for years, delivered doses hundreds of times higher than is safe because it lacked these fail-safes. Now, similar technologies are built with several layers of redundancy to ensure safety.

Q: What role do engineers have in shaping this technology responsibly?

Smith: As engineers, we have the ability to directly shape technology, but we often fall into the trap of only considering the feature set of what we’re building. With great power comes great responsibility. Systems are ultimately used by people, and it’s very hard to predict how people will use them and how that will affect them psychologically. We can’t just build something because it’s cool or because we want to give computers new abilities. We have to think about how those new abilities translate to people.

Q: What does that look like in practice?

Wu: We know how to build these guardrails in non-digital systems, like bridges. We still have a lot of research to do when it comes to building guardrails for digital systems that interact so closely with human behavior, and the speed of technological innovation adds another layer of difficulty. The underlying infrastructure, including data systems, operating systems, and web systems, needs the right safeguards, too.

At a minimum, that means ensuring that sensitive data can’t leak or be misused. Ideally, these safeguards also account for the psychological effects of generative AI and other emerging technologies.
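To make that concrete, here is a minimal, hypothetical sketch of the kind of data-level safeguard Wu alludes to: a filter that masks sensitive fields before records leave a data system. The field names and patterns are illustrative assumptions for this sketch, not part of any system discussed above.

```python
import re

# Hypothetical illustration: redact obvious personally identifiable
# information (PII) before a record is exported or displayed.
# The field names and regex below are assumptions for this sketch.

SENSITIVE_FIELDS = {"ssn", "email", "phone"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact_record(record: dict) -> dict:
    """Return a copy of `record` with sensitive fields and emails masked."""
    cleaned = {}
    for key, value in record.items():
        if key.lower() in SENSITIVE_FIELDS:
            # Mask any field whose name marks it as sensitive.
            cleaned[key] = "[REDACTED]"
        elif isinstance(value, str):
            # Also mask email-like strings embedded in free text.
            cleaned[key] = EMAIL_RE.sub("[REDACTED]", value)
        else:
            cleaned[key] = value
    return cleaned


if __name__ == "__main__":
    row = {"name": "Jane Doe", "email": "jane@example.com",
           "note": "follow up at jane@example.com"}
    print(redact_record(row))
    # {'name': 'Jane Doe', 'email': '[REDACTED]', 'note': 'follow up at [REDACTED]'}
```

A real safeguard would sit much deeper in the stack (access controls, audit logs, differential privacy), but the point of the sketch is that the check runs automatically rather than relying on a careful human operator.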

Smith: This is where we need to partner with folks from other fields, like psychologists, sociologists, and political scientists, and also keep up with current events. This way, we can understand the real-world ramifications of what we’re building and incorporate that into our design process. We should think of the effect on human experience as part of the system’s output, not just the raw, functional capabilities. The goal is to build systems that elicit a different, more positive outcome for people.

Wu: Many people focus on solutions within their area of expertise, but AI’s influence is so broad that it requires systems thinking. It’s about understanding how different components — both technological and societal — interact with each other. You can’t just build a better social network or pass a single law and expect to solve everything. It requires holistic solutions.
