Fairness For Whom?

When we define fairness, we imply impartiality. But the trolley problem forces us to ask what fairness actually requires. Is fairness simply saving the most souls? Is it avoiding having to push someone off the bridge, as against the apparent heroism of pulling the lever? Or is fairness predetermined by our individual values?

As Coeckelbergh describes, algorithmic morality forces us to make these decisions before we encounter them, and the decisions are often presented independent of emotion. But in evaluating an algorithm’s efficacy, we must ask: fairness to whom? To answer that, we can lean on Bateson’s notions of contextual framing. If we add the emotional framing of relationship to the trolley problem, we might produce clearer, but not necessarily fairer, outcomes. If the lone person on the utilitarian spur is a family member, would we be more likely to sacrifice the five others?

Reinforcement learning that leans on pre-existing human answers to such dilemmas allows us to inform the algorithm with priors learned from human emotional responses, simulating a more human determination of good and bad. If our data indicate that most would pull the lever but not push the worker, this is not impartial fairness; at best it is fairness of scale, according to the answers we have.
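
To make "fairness of scale" concrete, here is a minimal, hypothetical sketch in Python (not from the original piece, and not any particular training pipeline): the reward signal is derived entirely from aggregated survey answers, so a policy trained against it simply reproduces the majority response for each dilemma rather than any impartial principle. The dilemma names and answer counts are invented for illustration.

    # Toy sketch: "fairness of scale" as a reward derived from aggregated
    # human answers to trolley-style dilemmas. All data here is hypothetical.
    from collections import Counter

    # Hypothetical survey responses: the action each respondent chose.
    responses = {
        "switch_track": ["pull", "pull", "pull", "dont_pull", "pull"],
        "footbridge":   ["dont_push", "dont_push", "push", "dont_push", "dont_push"],
    }

    def reward_from_majority(dilemma: str, action: str) -> float:
        """Reward an action by the share of respondents who chose it.

        Nothing here encodes an impartial principle; the signal is only
        agreement with the sampled answers.
        """
        counts = Counter(responses[dilemma])
        return counts[action] / sum(counts.values())

    # A greedy "policy" just picks the majority action for each dilemma.
    policy = {
        dilemma: Counter(answers).most_common(1)[0][0]
        for dilemma, answers in responses.items()
    }

    print(policy)  # {'switch_track': 'pull', 'footbridge': 'dont_push'}
    print(reward_from_majority("footbridge", "push"))  # minority action, low reward

The point of the sketch is that nothing in the reward distinguishes a principled judgment from a popular one: the trained behaviour inherits whatever the sampled answers contain.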

