When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals

Jane R. Bambauer, Tal Zarsky, Jonathan Mayer

Research output: Contribution to journal › Article › peer-review


If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBDs)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm's inaccuracy, as in overfitted models, they should be removed (and routinely are). But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, deciding what to do about them will require humans to make difficult choices between social goals.
Original language: English
Number of pages: 83
Journal: University of California, Davis, Law Review
Issue number: 21-23
State: Published - 2022
