mggg/Quantitative_fairness
This repository accompanies the paper "Quantitative Relaxations for Arrow's Axioms".

Repository Structure

The repository contains the following core files:

setup.sh Sets up the environment for the project, including installing necessary dependencies.

fairness_metric.py

Implements:

  • The Kendall Tau distance function
  • Our quantitative fairness metrics:
    • $\sigma_{IIA}$ (Independence of Irrelevant Alternatives)
    • $\sigma_{UM}$ (Unanimity)
    • $\sigma_{IIA}^{WS}$ (The "Winner Set" version of IIA)
    • $\sigma_{UM}^{WS}$ (The "Winner Set" version of Unanimity)
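The actual implementation lives in fairness_metric.py; as a point of reference, the (unnormalized) Kendall tau distance it builds on can be sketched as follows. The function name and ranking representation here are illustrative, not necessarily the repo's own:

```python
from itertools import combinations

def kendall_tau_distance(ranking_a, ranking_b):
    """Count candidate pairs that the two rankings order differently.

    Each ranking is a sequence of candidates from most- to least-preferred;
    both rankings must contain the same candidates.
    """
    pos_a = {c: i for i, c in enumerate(ranking_a)}
    pos_b = {c: i for i, c in enumerate(ranking_b)}
    # A pair (x, y) is discordant when the two rankings disagree on its order.
    return sum(
        1
        for x, y in combinations(ranking_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )
```

For example, the distance between a ranking and its reversal equals the total number of candidate pairs, while identical rankings are at distance 0.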

voting_rules.py A factory that constructs the appropriate VoteKit voting rule from a string identifier.
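The string-dispatch pattern behind voting_rules.py can be sketched without VoteKit as follows. The rule implementations and the `get_voting_rule` name below are illustrative stand-ins; the repo's version returns VoteKit election objects instead:

```python
def plurality(profile):
    """Winner is the candidate with the most first-place votes."""
    firsts = {}
    for ballot in profile:
        firsts[ballot[0]] = firsts.get(ballot[0], 0) + 1
    return max(firsts, key=firsts.get)

def borda(profile):
    """Winner is the candidate with the highest total Borda score."""
    n = len(profile[0])
    scores = {}
    for ballot in profile:
        for i, candidate in enumerate(ballot):
            # Position i earns n - 1 - i points.
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - i)
    return max(scores, key=scores.get)

RULES = {"plurality": plurality, "borda": borda}

def get_voting_rule(name):
    """Look up a voting rule by its string name."""
    try:
        return RULES[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown voting rule: {name!r}")
```

Keeping the lookup in one place means the pipelines can select a rule from a config string without importing rule-specific code themselves.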

run_BT_pipeline.sh Runs the pipeline for the Bradley-Terry model.

run_ny_pipeline.sh Runs the pipeline for the New York dataset.

run_portland_pipeline.sh Runs the pipeline for the Portland dataset.

run_scottish_pipeline.sh Runs the pipeline for the Scottish dataset.

data/ Contains the raw and cleaned data files for the experiments. Populated by the setup.sh script.

pipelines/ Contains the pipeline files for each dataset. These files are run by the corresponding run_*_pipeline.sh scripts in the root directory.

stats/ Contains the statistics files generated by the pipelines.

plots/ Contains the plots generated by the pipelines.

other_files/ Contains miscellaneous files that were used in earlier notebooks or that help clean the data during setup.

notebooks/ Contains Jupyter notebooks and earlier drafts of work that appears in the paper.

Setup

This repo uses uv to manage dependencies. Once uv is installed, you can simply run:

./setup.sh

from your terminal to set up the environment and extract all the necessary data for replication.

Running the Experiments

To reproduce the experiments in the paper, run the pipeline scripts:

NOTE: run_BT_pipeline.sh will take a while on a machine with few CPU cores. The commands below are ordered from fastest to slowest.

./run_ny_pipeline.sh
./run_portland_pipeline.sh
./run_scottish_pipeline.sh
./run_BT_pipeline.sh

Data Sources

Data retrieval is handled by the setup.sh script. The data sources are:
