As Artificial Intelligence algorithms rise in popularity, it becomes increasingly important to recognize the social problems that they carry with them. AI plays a role in perpetuating societal biases and stereotypes. More specifically, in the field of Natural Language Processing (NLP), models have been shown to propagate, and may even amplify, gender bias. Gender bias can appear in different components of an NLP system, such as the training data and linguistic resources (e.g., text corpora). The study of bias in AI is not new; nonetheless, methods to diagnose and mitigate gender bias specifically in NLP are relatively recent. In this work, I intend to present examples of gender bias in real-life applications and describe some of the proposals developed to mitigate these problems.