A Columbia Engineering Student on What's Next for Artificial Intelligence

A PhD candidate who worked for OpenAI and Apple discusses natural language processing, AI hallucinations, and deep fakes.

By Christopher D. Shea
April 14, 2023

After growing up in rural Vermont and attending Williams College in western Massachusetts, Melanie Subbiah made the leap into city living when she decamped for San Francisco, where she interned at Meta and worked for Apple and OpenAI. Subbiah returned to the East Coast in 2019 to join Columbia Engineering’s computer science department as a doctoral candidate. At Columbia, Subbiah is looking into how to develop natural language processing technologies, a subset of AI, to help summarize long texts, ranging from novels to meeting minutes. Columbia News caught up with Subbiah to discuss her path to Columbia, whether AI scares her, and what she thinks these technologies could mean for society and for the job market.

When did you first get interested in computer science? 

I always loved math growing up, but I also loved reading and writing. When I was thinking about college, it wasn’t really clear to me what I wanted to study, but I knew I wanted to go somewhere where I could continue to explore both math and science and the humanities.

You worked at OpenAI and Apple and interned at Meta before coming to Columbia. What made you want to get a PhD at Columbia?

Having five years that are largely your own, to develop in the ways you’re interested in developing, to read broadly, and to figure out who you are as a technical thinker and leader, was appealing to me. I also wanted to step back from the tech sector and think about what kind of role I wanted to play in the way that AI is developing, and what kind of leader I wanted to be.

I came to Columbia because of New York and because of my adviser, Kathy McKeown. I’ve seen friends go through PhDs where they had a difficult relationship with their adviser, and it makes such a huge difference, because that’s the main person you’re interacting with professionally for many years of your life. So it was really important to me to find someone who I felt could support me as a human and not just a PhD student.

You worked in the past on the question of how AI could be used to identify malicious, persuasive texts. Can you explain what that research was about?

We were trying to see if we could use AI to identify when online text had malicious intent to spread false or misleading information. It’s a complicated problem, because you can’t always tell what the intent behind something is just by looking at the text itself. So we tried to develop markers that identified specific propaganda techniques, like scapegoating or bandwagoning, to recognize whether something might be nefarious. We found that it’s quite hard to do, and it’s more feasible when you also have information about the actual source and how they’re distributing information than when you’re just trying to analyze text in isolation.

What are you looking at in your dissertation?

I’ve decided to pivot topics and to go back to my original passion that got me into natural language processing, which is my love of creative writing and reading. I’m planning on focusing on summarization, and looking at how natural language processing tools can (and can’t) identify critical parts of texts and paraphrase and reword them in a meaningful way.

I’m going to look at long texts like novels. This kind of research has only become feasible recently because of how the technology has evolved to be able to process long documents well. There’s more in this space now, since people are interested in how AI tools could help summarize all kinds of very long documents, like meeting minutes, government reports, or financial documents.

One thing that we look at is how to address a phenomenon known as hallucination, when an artificial intelligence introduces facts into a summary that are not present in, or differ from, the source document. Recent large language models, like GPT-4 (the latest OpenAI technology, now available to the public by paid subscription only), tend to hallucinate less than earlier models, but the hallucinations that remain are often harder to spot because the model is so convincing.

Do you use ChatGPT to make your own work more efficient?

I'm still exploring ways to do that. Funnily enough, I've actually always been kind of a slow adopter of technology even though I love computer science. I still love to read physical books; I use a physical notebook when I'm writing down notes. But I have found some useful ways that I like interacting with ChatGPT. I was making a presentation the other day and wanted to put in some plots, and writing plotting code in Python is typically annoying boilerplate that you have to remember and look up. So I just asked ChatGPT to write it for me, and I very quickly had a beautiful plot that showed exactly what I wanted. Things like that have been helpful.
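(For readers curious what that boilerplate looks like, here is a minimal sketch of the kind of Python plotting code she describes; the data and labels are hypothetical placeholders, not drawn from her presentation.)

```python
# Illustrative only: typical matplotlib boilerplate for a simple line plot.
# The numbers and labels below are made-up placeholders.
import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4, 5]
accuracy = [0.62, 0.71, 0.78, 0.81, 0.83]

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(epochs, accuracy, marker="o")   # line plot with point markers
ax.set_xlabel("Epoch")
ax.set_ylabel("Accuracy")
ax.set_title("Model accuracy over training")
fig.tight_layout()
plt.show()
```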

What can you tell us about how to spot deep fakes, images that look real but were generated with artificial intelligence? 

There are often subtle artifacts, or “tells,” that you can notice, like odd lighting and shadows, strange poses, or blurring on complex features like hands or hair. It’s a bit easier to detect in images than in written language. Language uses discrete units (words), which are either valid or not, so there’s not really a good way at this point to detect what’s human generated versus what’s not when the AI is producing a meaningful sequence of words. That said, these chatbots do sometimes have a specific style that you can pick up on.

You’re a fiction lover. Do you worry these tools may spell the end of the writer?

I’ve actually been less concerned about that because so much of what I love about a book or a story is personal to who the author is, and it's something that the author actually wants to say. There’s something different about a story that's written by an AI versus a story that's written by an author. I also think there are lots of ways for authors to collaborate with AI tools. It doesn’t have to be this either-or situation.

I've actually felt more sadness thinking about things like coding. A lot of people, myself included, get into computer science and coding because they just really enjoy that as a day-to-day activity. That's going to change a lot in terms of what programming we actually need a human to do versus what we can accomplish with an AI system interpreting the natural language we speak every day.

Do you worry about the emergence of artificial general intelligence (AGI), an AI whose intelligence exceeds that of humans, and that may decide it wants to do something very bad, like wipe humanity out?

I'm less concerned about this idea of really harmful AGI overall, because there would have to be a lot of human decisions that go wrong first before we get to a position where AGI is doing something terrible. 

Regulation is super important to how AI is going to evolve over the next decade. But I am concerned about the pace of regulation and whether people who are in a position to make a difference in that space know enough about what's going on and can onboard that information quickly enough to make a difference. 

What’s a takeaway that you want to leave readers with, as someone who has worked in the AI sector and also taken a step back to think about it more broadly as a doctoral candidate?

First, people need to be aware that systems like ChatGPT and other powerful language models are going to continue to develop quickly, and they're going to make jumps forward and demonstrate capabilities that are hard to predict.

Secondly, it's important to build intuition around how these systems work. Oftentimes, if you're just interacting with them briefly, they will appear human-like, and it's easy to ascribe human intention, reasoning, and thought to what the system is doing until you start to notice subtle quirks that indicate something a little bit different is going on. That is important to keep in mind as we wait for societal regulation and adaptation to catch up: As consumers of this technology, always be mindful and skeptical, and understand that you're interacting with an AI statistical model, not a human agent.

Artificial intelligence, ChatGPT, and other natural language processing technologies are obviously interesting to you. Do they scare you?

The technology itself is not scary to me. I'm excited by all of the things that it can do. But what does make me nervous is how humans are going to use these systems and how quickly society can adapt to a big technological shift like this. The economic impact as job markets change quickly concerns me, as does the fact that quite a bit of power is being consolidated in the hands of just a few tech sector players.

You worked at OpenAI in 2019 and 2020. When OpenAI rolled out ChatGPT publicly late last year, did it surprise you how much the technology had advanced since you last worked on it?

I was not surprised by how good ChatGPT was or by how fascinating it was to the world, because I remember when I was working on GPT-3 (a predecessor to ChatGPT), I had that same feeling of, “oh, wow, this is something really different.” There are some new techniques that have been incorporated since then, including one called reinforcement learning from human feedback, or RLHF, which helps the model not only to have these different capabilities with language, but also to be much more aligned with what a human wants in a particular response. It’s much easier to get a meaningful response now compared to when I was working on GPT-3, when we would have to prompt the system with many examples to get the kind of response we wanted.