Designing AI Algorithms to Make Fair Decisions in Auctions, Pricing, and Marketing

NSF/Amazon FAI grant will support IEOR team’s work to ensure that personalized targeting is fair

Mar 28 2022 | By Holly Evarts
Shipra Agrawal, Eric Balkanski, Rachel Cummings, Adam Elmachtoub, and Christian Kroer

By now, we’re all used to personalized recommendations, whether from Amazon, Netflix, or Google. The use of AI systems in business settings has grown tremendously, thanks to access to ever-increasing amounts of consumer data and the ability not only to personalize but also to run algorithms in real time, so the targets (us) get the latest pitch. Advertising, pricing, and marketing algorithms all run on data, and therein lies a catch: because our data can reflect demographics such as gender and race, these algorithms can potentially discriminate, showing more advantageous ads, better prices, or better opportunities to some groups of people over others.

Researchers from the industrial engineering and operations research (IEOR) department have been working on this challenge and just won a grant from the National Science Foundation Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI) to design algorithms for these business settings that are both fair and profitable. The team includes Shipra Agrawal, an expert in adaptive algorithms for learning from interactions and for making sequential decisions, including online learning techniques like reinforcement learning; Eric Balkanski, a leader in social network analysis and discrete optimization; Rachel Cummings, who focuses on designing privacy-preserving algorithms for machine learning and other data-driven systems; Adam Elmachtoub, who focuses on data-driven optimization in e-commerce and service systems; and Christian Kroer, who develops optimization algorithms and theory for large-scale markets such as those used in internet advertising.

Historically, discrimination has been exposed in many applications, for instance, discriminatory targeting in housing and job ad auctions, preferential personalized pricing for mortgages and other loans, and unequal treatment of social network users through marketing campaigns that exclude certain protected groups. “We decided to focus on providing new frameworks and AI algorithms for conducting business fairly in the three central business domains that rely heavily on AI: auctions, pricing, and marketing,” the team said.

Ads shown to users are personalized because advertisers are willing to bid more in auctions for users with certain demographic features. Pricing decisions on ride-sharing platforms, or interest rates on loans, are customized to the consumer’s characteristics in order to maximize profit. Marketing campaigns on social media platforms target users based on predictions of whom they will be able to influence in their network.
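The ad-auction mechanism above can be made concrete with a minimal sketch of a second-price auction (all bids and advertiser names here are hypothetical, for illustration only): the highest bidder wins and pays the second-highest bid, so if advertisers bid differently depending on a user’s demographic features, different groups end up seeing different ads.

```python
def second_price_auction(bids):
    """bids: dict mapping advertiser -> bid amount.
    Returns (winning advertiser, price paid), where the winner
    pays the second-highest bid (or 0 if there is no competitor)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical bids for the same ad slot, for users in two groups.
# The job ad bids high only for users in group A, so only group A sees it.
bids_for_group_a = {"job_ad": 2.0, "retail_ad": 1.0}
bids_for_group_b = {"job_ad": 0.5, "retail_ad": 1.0}

print(second_price_auction(bids_for_group_a))  # ('job_ad', 1.0)
print(second_price_auction(bids_for_group_b))  # ('retail_ad', 0.5)
```

Even though the auction rule itself is neutral, the outcome differs across groups purely because of how the bids depend on user features, which is exactly the kind of disparate exposure the grant targets.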

While moral, ethical, and legal considerations are prompting firms and regulators to ensure that business practices do not discriminate against protected groups of people, there are a number of challenging issues in designing fairness metrics and algorithms in AI systems.

Existing algorithmic fairness criteria, such as individual or statistical notions of parity, were developed for learning problems and are not well defined in the context of decision-making problems, so they often cannot be directly imported to business settings involving multiple parties and utility functions. In addition, current machine-learning fairness research focuses on balancing fairness with prediction accuracy, which differs from the relevant business metrics such as revenue and product adoption. Finally, important business aspects of competition and long-term market share are entirely missing from existing discussions of trade-offs between fairness and learning objectives.
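To see why parity notions from learning problems do not transfer directly, consider a minimal sketch (with entirely hypothetical prices) of a statistical-parity style check for personalized pricing: in a prediction setting, parity compares rates of predicted labels across groups, but here the decision is a price, so the closest analogue is a gap in average offered prices, and it says nothing about revenue or adoption.

```python
def price_parity_gap(offers):
    """offers: dict mapping group name -> list of prices offered
    to members of that group. Returns the largest difference
    between any two groups' average offered prices."""
    means = {g: sum(prices) / len(prices) for g, prices in offers.items()}
    return max(means.values()) - min(means.values())

# Hypothetical personalized price offers to two demographic groups:
offers = {
    "group_a": [9.0, 10.0, 11.0],   # average 10.0
    "group_b": [12.0, 13.0, 14.0],  # average 13.0
}

print(price_parity_gap(offers))  # 3.0
```

A zero gap under this metric could still coexist with unfair outcomes (or ruinous revenue), which is why the grant proposes new fairness criteria tailored to decision-making rather than prediction.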

The team’s approach for this FAI grant is to consider the three aspects of the decision-making pipeline in auctions, pricing, and marketing. They will explore the new types of algorithmic fairness criteria necessary for these domains, and design novel algorithms that can practically incorporate these fairness considerations in real-world large-scale systems. For each of the business contexts, they will examine how data can and should be collected in order to induce fair outcomes in the downstream decision-making task. And, finally, they will consider how incorporating fairness measures—or not—can positively or negatively affect the long-term impact of firms and consumers, especially in the presence of competition.
