Machine Learning (ML) models support a range of human decisions across many domains in both the private and public sectors. As applications have grown more complex, so have the underlying algorithms, to the point that they are effectively used as black boxes. However, black-box models carry the risk of encoding erroneous data or spurious correlations, which can lead to errant and/or biased decisions. As a result, “explainability/interpretability” has become a highly desired property of the ML models we deploy in the real world. In this talk, we will give an overview of the different “explainability” needs that arise in human-ML decision-making systems, how the ML research community has responded to those needs, what questions the field still needs to answer, and what role practitioners can play in advancing explainable ML.