
Cybersecurity Expert Simha Sethumadhavan on the State of AI Cyber Regulation

Sethumadhavan joins high-level officials and leading researchers at the Cyber Regulation and Harmonization Conference

November 13, 2024
Holly Evarts

Cybersecurity expert Simha Sethumadhavan is best known for his "hardware-up" principle for designing secure systems. This approach uses hardware not only to make security faster and more efficient, but also to make security solutions more effective by strengthening the security and trustworthiness of both hardware and software.

Photo Caption: Simha Sethumadhavan, professor of computer science

A panelist at the Nov. 13-14 Cyber Regulation and Harmonization Conference, co-sponsored by Columbia Engineering and organized by New York State and the School of International and Public Affairs (SIPA), Sethumadhavan will discuss AI cyber regulations: how to prevent AI from aiding malicious hackers, and how to stop such groups from hacking AI systems and their training data.

The conference is focused on cybersecurity and AI safety regulations, and especially on harmonizing these regulations among international, federal, and state entities. The event, held on campus at SIPA, features keynotes by high-level officials and panels on the latest in cyber regulation.

Sethumadhavan, professor of computer science at Columbia Engineering and a member of the Data Science Institute, talked to us about the state of cyber regulation today and what’s next from an engineering perspective. 

How has the landscape of cyber regulation evolved in recent years, especially with the rise of AI?

The most recent push has been towards regulating AI safety. Policymakers are concerned about the development and distribution of AI being controlled by undemocratic entities, about protecting the training of AI systems from hackers, about creating opportunities for everyone to access AI systems and participate in their development, and finally about the fairness and “value systems” of the AI itself.

How does your work impact cyber regulations?

I am part of a small but growing group of technologists who are interested in policy and regulations. I am a big believer in providing incentives for innovators to prioritize safety and security in the manner they think will be most beneficial. Our research has shown that agreed-upon mandates improve cybersecurity more than letting vendors set their own performance goals and benchmarks without regard for cybersecurity, or transfer risk to third parties.

However, mandates in the form of checklists are cumbersome, onerous, and ineffective, particularly for emerging technologies. 

Our research on resource-based mandates offers a better solution than checklist-based security. The basic idea is for innovators to agree to spend a certain amount of resources on safety and security and to disclose how exactly they spent that budget. As long as they meet the budget and disclosure requirements, they are free to innovate in whatever way they wish. Basically, we want to know that they have done their bit towards safety and security, and we want them to think about this from the get-go instead of fixing problems only after they arise.
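To make the idea concrete, here is a minimal, hypothetical sketch in Python of what a resource-based disclosure might look like. All names, categories, and amounts below are illustrative assumptions for exposition, not an actual mandate, standard, or anyone's real budget:

```python
from dataclasses import dataclass, field

@dataclass
class SpendItem:
    """One disclosed line item of safety/security spending (illustrative)."""
    category: str   # e.g., "red-teaming", "fuzzing", "third-party audit"
    amount: int     # spend in dollars (hypothetical unit)

@dataclass
class MandateDisclosure:
    """A vendor's self-disclosed security budget under a resource-based mandate."""
    vendor: str
    agreed_budget: int            # the resource commitment the vendor agreed to
    items: list[SpendItem] = field(default_factory=list)

    def total_spend(self) -> int:
        return sum(item.amount for item in self.items)

    def meets_mandate(self) -> bool:
        # Compliance here is only: did they spend the agreed budget, and did
        # they disclose how? *What* they spent it on is left to the innovator.
        return self.total_spend() >= self.agreed_budget and len(self.items) > 0

# Example: a hypothetical vendor free to allocate its budget as it sees fit,
# as long as the total commitment and the disclosure requirement are met.
disclosure = MandateDisclosure(
    vendor="ExampleCorp",
    agreed_budget=1_000_000,
    items=[
        SpendItem("red-teaming", 400_000),
        SpendItem("fuzzing infrastructure", 350_000),
        SpendItem("third-party audit", 300_000),
    ],
)
print(disclosure.total_spend(), disclosure.meets_mandate())  # -> 1050000 True
```

The point of the sketch is the shape of the rule: compliance is defined by the budget and the disclosure, not by a prescriptive checklist of specific controls.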

I arrived at these ideas after years of frustration serving on standards-making bodies in my area, watching them repeatedly fall behind the curve on threats, and seeing decision-making paralyzed by misaligned incentives among the various stakeholders.

How is this approach being accepted by elected officials? 

There is a lot of interest in this topic. The main concern is what we can do to deter bad actors without burdening good actors. The requirements and expectations also need to be tailored for the scale of organizations and based on the expected end use of the technology.

What do you think are the most critical areas for regulatory focus in the next 5-10 years?

The most important is providing incentives for technology creators to do the right thing from the get-go instead of waiting for bad things to happen and then reacting. With social media, there was first under-regulation; then a set of norms evolved, but only after significant harms and hardships had been caused; and now there is a patchwork of regulations.

What do you see coming down the road and what do we need to be prepared for?

Web browsers and databases offer a guide for how AI might evolve. We see a mix of proprietary and open-source ecosystems here, with winners and losers decided by how well the technology is tailored to end-user and customer needs, and by the model for commercialization. History has also repeatedly taught us that making something open source (or, in the case of AI, open weights, or declaring the data mix and alignment recipe) is necessary but not sufficient to remove vulnerabilities and engender trust. Norms in this area will evolve over time, but there needs to be leadership to incentivize self-governance and to ensure that norms do not devolve to the lowest common denominator, if we are to create secure, legal, safe, and democratic AI.


Lead Photo Caption: Simha Sethumadhavan (center) with students 

Photo Credit: Jeffrey Schifman/Columbia Engineering
