Paper Review 8

By: Chengyi (Jeff) Chen

Article: Designing Fair, Efficient, and Interpretable Policies for Prioritizing Homeless Youth for Housing Resources


What is the main problem/task addressed by the paper?

There are currently hundreds of thousands of homeless youth who are forced to live in emergency shelters or on the streets, where they run a high risk of violence, substance abuse, and sexual exploitation. This paper addresses the problem of designing housing allocation policies that (i) maximize efficiency by directly increasing the expected number of youth who are stably housed at the end of the intervention, (ii) ensure fairness by letting policy-makers tune the model to their own fairness criteria, and (iii) remain interpretable, so that the reasons behind each recommendation are easy to understand.


What was done before, and how does this paper improve on it?

The current policy ranks homeless youth and adults using the Next Step Tool, but this assessment can be unreliable because youth may misreport their situation to avoid trouble. The status-quo technique for allocating housing to homeless youth is the TAY prioritization, which, unlike this paper, does not attempt to balance efficiency, fairness, and interpretability. Furthermore, previous work in this field mostly considers kidney allocation. This paper improves upon prior methods by (1) “propos(ing) an exact formulation of the allocation problem that enables us to guarantee fairness, while Bertsimas et al. use a heuristic method that cannot guarantee fairness”, (2) creating a “model (that) is exact, incorporating the order in which youth and housing resources arrive, to provide accurate prioritization”, and (3) “consider(ing) larger classes of interpretable policies (e.g., based on decision trees)”, which translate into substantial empirical improvements.


What is the one cool technique/idea/finding that was learned from this paper?

I found the customizability of the constraints in the MIP models pretty interesting. Explicitly imposing constraints on the model appears to be the most direct way shown here to enforce fairness. In particular, the paper builds optimal decision trees that respect fairness constraints using an MIP formulation instead of a heuristic.
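To make this concrete for myself, here is a minimal sketch (my own toy example, not the paper's actual formulation) of how a fairness constraint can be imposed directly in an allocation MIP. It uses the PuLP solver with made-up success probabilities and group labels:

```python
# Toy MIP: allocate housing slots to youth, maximizing the total
# probability of a stable exit, while a fairness constraint bounds the
# gap in slots given to two hypothetical demographic groups.
import pulp

success_prob = [0.9, 0.7, 0.6, 0.8, 0.5, 0.4]   # hypothetical success scores
group = ["A", "A", "A", "B", "B", "B"]          # hypothetical protected groups
capacity = 3                                     # housing slots available
epsilon = 1                                      # max allowed gap in slots

prob = pulp.LpProblem("fair_allocation", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(success_prob))]

# Efficiency: maximize the expected number of stably housed youth.
prob += pulp.lpSum(p * xi for p, xi in zip(success_prob, x))

# Resource constraint: cannot exceed the available housing slots.
prob += pulp.lpSum(x) <= capacity

# Fairness constraint: the two groups receive roughly equal numbers of slots.
alloc_A = pulp.lpSum(xi for xi, g in zip(x, group) if g == "A")
alloc_B = pulp.lpSum(xi for xi, g in zip(x, group) if g == "B")
prob += alloc_A - alloc_B <= epsilon
prob += alloc_B - alloc_A <= epsilon

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(xi.value()) for xi in x])  # e.g. [1, 1, 0, 1, 0, 0]
```

Tightening epsilon here is exactly the kind of policy-maker knob the paper describes: the solver then trades some expected efficiency for a more balanced allocation.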

Using simple machine learning models, they obtained slightly better results than TAY, but at the cost of increased unfairness, with linear regression favouring allocation to the White population.
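As an aside, a simple way to surface this kind of disparity (my own illustration, not the paper's methodology) is to rank by model score, allocate the top-k, and compare allocation rates across groups:

```python
# Rank youth by a (hypothetical) model score, serve the k highest, and
# report the resulting allocation rate per group. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(100)                        # hypothetical model scores
race = rng.choice(["White", "Non-White"], 100)  # hypothetical labels
k = 30                                          # housing slots available

allocated = np.zeros(100, dtype=bool)
allocated[np.argsort(scores)[::-1][:k]] = True  # serve the k highest scores

for g in ["White", "Non-White"]:
    rate = allocated[race == g].mean()
    print(f"{g}: allocation rate = {rate:.2f}")
```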

Ultimately, the paper gains efficiency and fairness at the same time, which is remarkable given that the two usually have to be traded off against each other.


What part of the paper was difficult to understand?

With the abundance of symbols in the paper, it was a tad hard to keep track of what each of them meant when formulating the MIP policies.

I’m not too experienced with MIP, and there was a section that stated “Next, we show that if F is polyhedral, Problem (2) can be solved as a mixed-integer linear optimization problem provided one can define the scores π using linear inequalities.”… I could barely understand why we were even trying to show that F is polyhedral in the first place.
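After rereading, my best guess is that polyhedrality matters because a polyhedral set is exactly one describable by finitely many linear inequalities, so requiring the policy's outcomes to lie in F adds only linear constraints and the whole problem remains a mixed-integer linear program that off-the-shelf solvers can handle. In my own notation (not the paper's):

```latex
% A polyhedral fairness set is just a finite system of linear inequalities.
\[
  F = \{\, x \in \mathbb{R}^n : A x \le b \,\},
  \qquad A \in \mathbb{R}^{m \times n},\; b \in \mathbb{R}^m .
\]
```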

TAY (the status-quo Transition Age Youth prioritization tool) was never really explained.


Brainstorming ideas: What generalization or extension of the paper could be done? Or what are other applications that can benefit from the described approach? What more could have been done in this application space that could leverage AI? Does this give you an idea about a possible project a student could do?

Because the key to ensuring fairness seems to be the constraints explicitly imposed on the MIP models, I wonder whether those constraints could also be imposed on classical machine learning models. That would generalize this way of ensuring fairness to applications that cannot be solved by MIP but instead require deep neural networks, for example by incorporating the constraints as a penalty term that increases the loss during training; see the sketch below.
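A minimal sketch of what I have in mind (my own illustration, assuming PyTorch; the data and the penalty form are made up): penalize the squared gap between the mean predicted scores of two groups, so that violating fairness directly increases the training loss.

```python
# Soft fairness constraint as a training penalty: the network is pushed
# to give similar average scores to both groups. Entirely synthetic data.
import torch

def fairness_penalty(scores, group_mask):
    """Squared gap between the mean scores of the two groups."""
    return (scores[group_mask].mean() - scores[~group_mask].mean()) ** 2

model = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 5.0  # penalty weight; larger values trade efficiency for fairness

# Hypothetical batch: features, outcomes, protected-group indicator.
X = torch.randn(64, 10)
y = torch.rand(64, 1)
g = torch.rand(64) < 0.5

for _ in range(100):
    opt.zero_grad()
    scores = model(X)
    loss = torch.nn.functional.mse_loss(scores, y) \
        + lam * fairness_penalty(scores.squeeze(1), g)
    loss.backward()
    opt.step()
```

Unlike the paper's MIP constraints, a penalty like this is soft: it discourages unfairness rather than guaranteeing a bound on it, which is exactly the gap a student project could explore.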