Algorithms with Fairness Guarantees

Humans are increasingly relying on algorithms to make decisions. Depending on the data an algorithm acts on, its output might inadvertently be biased towards some group in the input. Moreover, algorithms trained on real-world data might amplify biases present in the training data. This could have serious social, ethical, and legal consequences. Therefore, for such problems, algorithms should be designed so that their output is not biased towards any group in the input, while also optimizing the “cost” of the solution produced. Over the last few years, we have been studying several such problems.

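As a concrete illustration of this trade-off between solution cost and group fairness, below is a minimal sketch of one hypothetical group-fair variant of the knapsack problem (in the spirit of the first reference below, but not the algorithm from that paper): choose a subset of items of maximum total value, subject to a capacity constraint and a lower bound on the number of items selected from each group. The item data, lower bounds, and the brute-force solver are all illustrative assumptions.

```python
from itertools import combinations

# Illustrative sketch of a group-fair knapsack variant (hypothetical
# formulation, not the algorithm from the cited AAMAS 2021 paper):
# maximize total value subject to a knapsack capacity and a minimum
# number of chosen items from each group.

def fair_knapsack_bruteforce(items, capacity, min_per_group):
    """items: list of (value, weight, group); min_per_group: group -> min count.
    Exhaustive search over subsets -- only suitable for tiny instances."""
    best_value, best_subset = None, None
    n = len(items)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(items[i][1] for i in subset)
            if weight > capacity:
                continue
            counts = {}
            for i in subset:
                g = items[i][2]
                counts[g] = counts.get(g, 0) + 1
            if any(counts.get(g, 0) < lb for g, lb in min_per_group.items()):
                continue
            value = sum(items[i][0] for i in subset)
            if best_value is None or value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Example (made-up numbers): without the fairness constraint the optimum
# picks only group "A" items (value 19); requiring at least one item from
# each group yields value 13, illustrating the cost of fairness.
items = [(10, 4, "A"), (9, 4, "A"), (3, 3, "B"), (2, 2, "B")]
print(fair_knapsack_bruteforce(items, capacity=8, min_per_group={"A": 1, "B": 1}))
```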

References:

Deval Patel, Arindam Khan, Anand Louis, “Group Fairness for Knapsack Problems.” AAMAS, pp. 1001–1009, 2021.

Sruthi Gorantla, Amit Deshpande (Microsoft Research, Bangalore), Anand Louis, “On the Problem of Underranking in Group-Fair Ranking.” ICML, 2021.

Govind S. Sankar, Anand Louis, Meghana Nasre, Prajakta Nimbhorkar, “Matchings with Group Fairness Constraints: Online and Offline Algorithms.” IJCAI, 2021.

Faculty: Anand Louis, CSA
