As the amount of data in the world has grown tremendously, we have had to invent new terminology to describe it: terabytes, petabytes, exabytes, and zettabytes are common terms these days. Publicly available data is a treasure trove of information that can be used for public policy research, public perception analysis, governance, and much more. However, because of its volume, manually inspecting and comprehending this kind of data has become nearly impossible. This is where NLP comes into play. Natural Language Processing (NLP) has proven extremely effective at recognizing human speech, comprehending natural language, and producing text that people can read and understand. These capabilities make NLP well suited to filling this gap in various strategic studies. In this talk, we will explore how to scrape non-traditional data, how to clean and preprocess it, and how to extract and generate insights, coupled with data visualization, all using Python.
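To give a flavour of the clean-and-analyze step of such a pipeline, here is a minimal sketch in Python using only the standard library. The stopword list, the sample text, and the function names (`preprocess`, `top_terms`) are illustrative assumptions, and the upstream scraping step is omitted; the talk itself may use different libraries and techniques.

```python
import re
from collections import Counter

# Illustrative stopword list; a real pipeline would use a fuller set
# (e.g. from NLTK or spaCy).
STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for", "that"}

def preprocess(text):
    """Lowercase the text, keep only alphabetic tokens, drop stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def top_terms(text, n=3):
    """Return the n most frequent terms as a quick first insight."""
    return Counter(preprocess(text)).most_common(n)

# Hypothetical scraped snippet standing in for real scraped data.
raw = ("Public data informs public policy. "
       "Policy research needs clean data, and clean data needs preprocessing.")
print(top_terms(raw))
```

Term frequencies like these are often the first thing fed into a visualization (bar charts, word clouds) or into downstream models such as topic modelling or sentiment analysis.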