A friendly tutorial on getting zip codes and other geographic data from street addresses.
Knowing how to deal with geographic data is a must-have for a data scientist. In this post, we will play around with the MapQuest Search API to get zip codes from street addresses along with their corresponding latitude and longitude to boot!
The Scenario
In 2019, my friends and I participated in CivTechSA Datathon. At one point in the competition, we wanted to visualize the data points and overlay them on San Antonio’s map. The problem is, we had incomplete data. Surprise! All we had were a street number and a street name — no zip code, no latitude, nor longitude. We then turned to the great internet for some help.
We found a great API by MapQuest that gave us exactly what we needed. With just a sprinkle of Python code, we were able to accomplish our goal.
Today, we’re going to walk through this process.
The Data
To follow along, you can download the data from here. Just scroll down to the bottom, tab on over to the Data Catalog 2019, and look for SAWS (San Antonio Water System) as shown below.
Below are two functions that call the API and return geo data.
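Here's a minimal sketch of what such functions could look like, using the requests library and assuming MapQuest's Geocoding API endpoint and its usual response fields; the function names are just for illustration:

import requests

def get_geo_response(address, api_key):
    # call the MapQuest Geocoding API for a single address and return the raw JSON
    url = 'http://www.mapquestapi.com/geocoding/v1/address'
    params = {'key': api_key, 'location': address}
    return requests.get(url, params=params).json()

def get_zip_lat_lng(address, api_key):
    # pull the zip code, latitude, and longitude from the first matching location
    location = get_geo_response(address, api_key)['results'][0]['locations'][0]
    return location['postalCode'], location['latLng']['lat'], location['latLng']['lng']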
We can manually call it with the line below. Don’t forget to replace the ‘#####’ with your own API key. You can use any address you want (replace spaces with a + character).
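For example, using the helper sketched above with a made-up address:

# manual call with a placeholder address and API key
get_zip_lat_lng('123+Main+St+San+Antonio+TX', '#####')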
Finally, let’s create a dataframe that will house the street addresses — complete with zip code, latitude, and longitude.
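One way to do this, again assuming the hypothetical helper above and a placeholder list of addresses standing in for the SAWS data:

import pandas as pd

api_key = '#####'  # replace with your own MapQuest API key

# placeholder addresses; in the datathon, these came from the SAWS dataset
addresses = ['123+Main+St+San+Antonio+TX', '456+Broadway+San+Antonio+TX']

records = []
for address in addresses:
    zip_code, lat, lng = get_zip_lat_lng(address, api_key)
    records.append({'address': address, 'zip_code': zip_code, 'lat': lat, 'lng': lng})

df_geo = pd.DataFrame(records)
df_geo.head()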
A hands-on introduction to Microsoft's analytics tool, Power BI.
As a data scientist, you’ll need to learn to be comfortable with analytics tools sooner or later. In today’s post, we will dive headfirst and learn the very basics of Power BI.
Be sure to click on the images to better see some details.
The Data
The dataset that we will be using for today’s hands-on tutorial can be found at https://www.kaggle.com/c/instacart-market-basket-analysis/data. This dataset is “a relational set of files describing customers’ orders over time.” Download the zip files and extract them to a folder on your local hard drive.
In this case, Power BI Desktop is smart to infer the two relationships. However, most of the time, we will have to create the relationships ourselves. We will cover this topic in the future.
Let’s go back to the Report View and examine the “Visualizations” panel closely. Look for the “slicer” icon which looks like a square with a funnel at the bottom right corner. Click on it to add a visual to the report.
This will cause the “department_id” field to appear under the “Visualizations” panel in the “Field” box.
Next, take your mouse cursor and hover over the top right corner of the visual in the Report View. Click on the three dots that appeared in the corner as shown below.
While the “department_id” visual is selected, you should see corner marks indicating the visual as the active visual. While the “department_id” is active, press CTRL+C to copy it and then CTRL+V to paste it. Move the new visual to the right of the original visual.
Make the second visual active by clicking somewhere inside it. Then look for the “aisle_id” field in the “Fields” panel on the right of Power BI Desktop as shown below.
Let’s create another visual by copying and pasting the table visual. This time, select (or drag) the following fields to the “Values” box of the visual.
order_id
user_id
order_number
order_hour_of_day
order_dow
days_since_prior_order
Power BI Desktop should now look similar to the image below.
Try clicking one of the selections in the table visual (where it’s showing “product_id” and “product_name”) and observe how the table on the right changes accordingly.
Make sure the "Axis" box contains "order_dow" and the "Values" box contains "order_id". Power BI Desktop should automatically calculate the count for "order_id" and display the field as "Count of order_id" as shown below.
This last graph is the most interesting because the number of reorders peaks during these three time periods: 7 days, 14 days, and 30 days since prior order. This means that people are in a habit of resupplying every week, every two weeks, and every month.
That’s it, folks!
In the next article, we will "prettify" our charts and make them more readable to others.
The procedures above may have seemed drawn out, but if you're a novice Power BI user, don't despair! With regular practice, the concepts demonstrated in this article will soon become second nature, and you'll probably be able to do them in your sleep.
Next, a visual placeholder will appear. Grab the hot corner marking on the lower right-hand corner of the placeholder and drag it diagonally down to the bottom-right corner of the main working area.
Let's change the Forecast length to 31 points. In this case, a data point equals a day, so 31 points roughly equate to a month's worth of predictions. Click on "Apply" in the lower right-hand corner of the Forecast group to apply the changes.
What if we wanted to compare how the forecast compares to actual data? We can do this with the “Ignore last” setting.
For this example, let's ignore the last 3 months of the data. Power BI will then forecast 3 months' worth of data using the dataset while ignoring the last 3 months. This way, we can compare Power BI's forecasting result with the actual data in the last 3 months of the dataset.
Let's click on "Apply" when we're done changing the settings as shown below.
Below, we can see how the Power BI forecast compares with the actual data. The solid black line represents the forecast, while the blue line represents the actual data.
The solid gray fill around the forecast represents the confidence interval. The higher its value, the larger the area will be. Let's lower our confidence interval to 75% as shown below and see how it affects the graph.
Next, let's take seasonality into account. Below, we set it to 90 points, which is equivalent to about 3 months. This tells Power BI to look for seasonality within a 3-month cycle. Play with this value and use whatever makes sense for your data.
PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment.¹
PyCaret
PyCaret does a lot more than NLP. It also does a whole slew of both supervised and unsupervised ML including classification, regression, clustering, anomaly detection, and association rule mining.
Let’s begin by installing PyCaret. Just do pip install pycaret and we are good to go! Note: PyCaret is a big library so you may want to go grab a cup of coffee while waiting for it to install.
Also, we need to download the English language model because it is not automatically downloaded with PyCaret:
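The usual way to do this is spaCy's command-line download (this assumes the small English model):

python -m spacy download en_core_web_sm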
Let’s read the data into a dataframe. If you want to follow along, you can download the dataset here. This dataset contains Trump’s tweets from the moment he took office on January 20, 2017 to May 30, 2020.
import pandas as pd
from pycaret.nlp import *
df = pd.read_csv('trump_20200530.csv')
Let’s check the shape of our data first:
df.shape
And let’s take a quick look:
df.head()
For expediency, let’s sample only 1,000 tweets.
# sampling the data to select only 1000 tweets
df = df.sample(1000, random_state=493).reset_index(drop=True)
df.shape
PyCaret’s setup() function performs the following text-processing steps:
Removing Numeric Characters
Removing Special Characters
Word Tokenization
Stopword Removal
Bigram Extraction
Trigram Extraction
Lemmatizing
Custom Stopwords
And all in one line of code!
The setup() function takes in two required parameters: the dataframe passed in data and the name of the text column passed in target. In our case, we also used the optional parameters session_id (for reproducibility) and custom_stopwords (to reduce the noise coming from the tweets).
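Here's a rough sketch of that call; the column name and the stopword list are placeholders, since the exact values depend on your data:

# initialize the NLP setup; 'text' and the stopwords below are placeholders
nlp = setup(data = df, target = 'text', session_id = 493, custom_stopwords = ['rt', 'https', 'amp'])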
After all is said and done, we’ll get something similar to this:
In the next step, we’ll create the model and we’ll use ‘lda’:
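# create an 'lda' model with 6 topics, using all available CPU cores
lda = create_model('lda', num_topics = 6, multi_core = True)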
Above, we created an ‘lda’ model and passed in the number of topics as 6 and set it so that the LDA will use all CPU cores available to parallelize and speed up training.
Finally, we’ll assign topic proportions to the rest of the dataset using assign_model().
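It's a one-liner:

# label each tweet with its topic proportions and dominant topic
df_lda = assign_model(lda)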
Thank you for reading! Exploratory data analysis uses a lot of techniques, and we've only explored a few in this post. I encourage you to keep practicing and employ other techniques to derive insights from data.
Opinionated advice for the rest of us. Love of math, optional.
Since my article about my journey to data science, I've had a lot of people ask me for advice on their own journey towards becoming a data scientist. A common theme started to emerge: aspiring data scientists are confused about how to start, and some are drowning in the overwhelming amount of information available in the wild. So, what's one more set of advice, right?
Well, let’s see.
I urge aspiring data scientists to slow it down a bit and take a step back. Before we get to learning, let’s take care of some business first: the fine art of reinventing yourself. Reinventing yourself takes time, so we better get started early on in the game.
In this post, I will share a very opinionated approach to do-it-yourself rebranding as a data scientist. I will assume three things about you:
You’re broke, but you’ve got grit.
You’re willing to sacrifice and learn.
You’ve made a conscious decision to become a data scientist.
Let’s get started!
First Things First
I'm a strong believer in Yoda's wisdom: "Do or do not, there is no try." For me, either you do something or you don't. Failure was not an option, and I took comfort in knowing that I wouldn't really fail unless I quit entirely. So, first bit of advice: don't quit. Ever.
Do or do not, there is no try.
Yoda
Begin with the End in Mind
Let's get our online affairs in order and start thinking about SEO. SEO stands for search engine optimization. The simplest way to think about it is as the fine art of putting as much "stuff" with your real professional name on the internet as you can, so that when somebody searches for you, all they find is the stuff you want them to find.
In our case, we want the words “data science” or “data scientist” to appear whenever your name appears in the search results.
So let’s start littering the interweb!
Create a professional Gmail account if you don't already have one. Don't make your username something like sexxydatascientist007@gmail.com. Play it safe; the more boring, the better. Start with first.last@gmail.com, or if your name is a common one, append it with "data," like first.last.data@gmail.com. Avoid numbers at all costs. If you have one already but it doesn't follow these guidelines, create another one!
Create a LinkedIn account and use your professional email address. Put “Data Scientist in Training” in the headline. “Data Science Enthusiast” is too weak. We’ve made a conscious decision and committed to the mission, remember? While we’re at it, let’s put the app on our phone too.
If you don’t have a Facebook account yet, create one just so you could claim your name. If you already have one, put that thing on private pronto! Go the extra mile and also delete the app on your phone so you won’t get distracted. Do the same for other social networks like Twitter, Instagram, and Pinterest. Set them to private for now, we’ll worry about cleaning them up later.
Create a Twitter account if you don’t already have one. We can take a little bit of leeway in the username. Make it short and memorable but still professional, so you don’t offend anybody’s sensibilities. If you already have one, decide if you want to keep it or start all over. The main thing to ask yourself: is there any content in your history that can be construed as unprofessional or mildly controversial? Err on the side of caution.
Start following the top voices in data science on LinkedIn and Twitter. Here are a few suggestions: Cassie Kozyrkov, Angela Baltes, Sarah N., Kate Strachnyi, Kristen Kehrer, Favio Vazquez, and of course, my all-time favorite: Eric Weber.
Create a Hootsuite account and connect your LinkedIn and Twitter accounts. Start scheduling data science-related posts. You can share interesting articles from other people about data science or post about your own data science adventures! If you do share other people’s posts, please make sure you give the appropriate credit. Simply adding a URL is lazy and no bueno. Thanks to Eric Weber for this pro-tip!
Take a professional picture and put it as your profile picture in all of your social media accounts. Aim for a neutral background, if possible. Make sure it’s only you in the picture unless you’re Eric (he’s earned his chops so don’t question him! LOL.)
Create a Github account if you don’t have one already. You’re going to need this as you start doing data science projects.
BONUS: if you can spare a few dollars, go to wordpress.org and get yourself a domain that has your professional name on it. I was fortunate enough to have an uncommon name, so I have ednalyn.com, but if your name is common, be creative and make one up that’s recognizably yours. Maybe something like janesmithdoesdatascience.com. Then you can start planning on having your resumé online or maybe even have a blog post or two about data science. As for me, I started with writing my experience when I first started to learn data science.
Clean-up: when time permits, start auditing your social media posts for offensive, scandalous, or unflattering content. If you’re looking to save time, try a service like brandyourself.com. Warning! It can get expensive, so watch where you click.
Do Your Chores
No kidding! When you're doing household chores, taking a walk, or maybe even while driving, listen to podcasts that cover data science topics, like Linear Digressions and TWiML. Don't get too bogged down about committing what they say to memory. Just go with the flow, and sooner or later, the terminology and concepts they discuss will start to sound familiar. Just remember not to get so caught up in the discussions that you start burning whatever you're cooking or miss your exit, like I have many times in the past.
Meat and Potatoes
Now that we’ve taken care of the preliminaries of living and breathing data science, it’s time to take care of the meat and potatoes: actually learning about data science.
There’s no shortage of opinions about how to learn data science. There are so many of them that it can overwhelm you, especially when they start talking about learning the foundational math and statistics first.
Blah!
Tell me and I forget, teach me and I remember, involve me and I learn.
Old Chinese Adage
While theory is important, I don't see the point of studying it first when I may soon fall asleep or, worse, get so intimidated by the onslaught of mathematical formulas that I become exasperated and end up quitting!
What I humbly propose, rather, is to employ the idea of "minimum viable knowledge," or MVK, as described by Ken Jee in his article How I Would Learn Data Science (If I Had to Start Over). Ken Jee describes minimum viable knowledge as learning "just enough to be able to learn through doing."² I suggest checking it out:
My approach to MVK is pretty straightforward: learn just enough SQL to be able to get the data from a database, learn enough Python to have program control and use the pandas library, and then do end-to-end projects, from simple ones to increasingly more challenging ones. Along the way, you'd learn about data wrangling, exploratory data analysis, and modeling. Other techniques like cross-validation and grid search would surely be part of your journey as well. The trick is never to get too comfortable and always push yourself, slowly.
To the list-oriented, here is my process:
Learn enough SQL and Python to be able to do end-to-end projects with increasing complexity.
For each project, go through the steps of the data science pipeline: planning, acquisition, preparation, exploration, modeling, delivery (story-telling/presentation). Be sure to document your efforts on your Github account.
For each iteration, I suggest doing an end-to-end project that practices each of these following data science methodologies:
regression
classification
clustering
time-series analysis
anomaly detection
natural language processing
distributed ML
deep learning
And for each methodology, practice its different algorithms, models, or techniques. For example, for natural language processing, you might want to practice these following techniques:
n-gram ranking
named-entity recognition
sentiment analysis
topic modeling
text classification
Just Push It
As you do end-to-end projects, it's good practice to push your work publicly to Github. Not only will it track your progress, but it also backs up your work in case your local machine breaks down. Not to mention, it's a great way to showcase your progress. Note that I said progress, not perfection. Generally, people understand if our Github repositories are a little bit messy. In fact, most expect it. At a minimum, just make sure that you have a great README.md file for each repo.
What to put on a Github Repo README.md:
Project name
The goal or purpose of the project
Background on the project
How to use the project (if somebody wants to try it for themselves)
Mention your keywords: “data science,” “data scientist,” “machine learning,” et cetera.
Don't ignore this note: don't make the big mistake of hard-coding your credentials or any passwords in your public code. Put them in a .env file and .gitignore it; a quick sketch follows below. For reference, check out this documentation from Github.
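For instance, here's a minimal sketch using the python-dotenv package (the variable name is just an example):

# pip install python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from the .env file in the project root
api_key = os.getenv('MY_API_KEY')  # example variable name; the value itself never appears in code

Then add a line containing just .env to your .gitignore so the file never gets committed.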
And finally, as you get better with employing different techniques and you begin to do hyper-parameter tuning, I believe at this point that you’re ready to face the necessary evil that is math. And more than likely, the more you understand and develop intuition, the less you’ll hate it. And maybe, just maybe, you’ll even grow to love it.
I have one general recommendation when it comes to learning the math behind data science: take it slow. Be gentle on yourself and don’t set deadlines. Again, there’s no sense in being ambitious and tackling something monumental if it ends up driving you insane. There’s just no fun in it.
There are generally two approaches to learning math.
One is to take the structured approach, which starts with learning the basics and then incrementally takes on the more challenging parts. For this, I recommend Khan Academy. Personalize your learning towards calculus, linear algebra, and statistics. Take small steps and celebrate small wins.
The other approach is slightly geared for more hands-on involvement and takes a little bit of reverse engineering. I call it learning backward. You start with finding out what math concept is involved in a project and breaking down that concept into more basic ideas and go from there. This approach is better suited for those who prefer to learn by doing.
A good example of learning by doing is illustrated by a post on Analytics Vidhya.
Well, learning math sure is hard! It’s so powerful and intense that you’d better take a break often or risk overheating your brain. On the other hand, taking a break does not necessarily mean taking a day off. After all, there is no rest for the weary! Every once in a while, I strongly recommend supplementing your technical studies with a little bit of understanding of the business side of things. For this, I suggest the classic book: Thinking with Data by Max Shron. You can also find a lot of articles here on Medium.
Taking a break can be lonely sometimes, and being alone with only your thoughts can be exhausting. So you may decide to finally talk with your family. The problem is, you're so motivated and gung-ho about data science that it's all you can talk about. Sooner or later, you're going to annoy your loved ones.
It happened to me.
This is why I decided to talk to other people with similar interests. I went to Meetups and started networking with people who are either already practicing data science or, like you, aspiring to become data scientists. In this (hopefully) post-COVID age that we're in, group video calls are more prevalent. This is actually more beneficial because geography is no longer an issue.
A good resource to start with is LinkedIn. You can use the social network to find others with similar interests or even find local data scientists who can still spare an hour or two every month to mentor motivated learners. Start with companies in your local municipality. Find out if they have a data scientist on staff, and if you do find one, kindly send them a personalized message with a request to connect. Give them the option to refuse gracefully, and ask them to point you to or recommend another person who does have the time to mentor.
The worst that can happen is they say no. No hard feelings, eh?
Conclusion
Thanks for reading! This concludes my very opinionated advice on rebranding yourself as a data scientist. I hope you got something out of it. I welcome any feedback. If you have something you’d like to add, please post it in the comments or responses.
Let’s continue this discussion!
If you’d like to connect with me, you can reach me on Twitter or LinkedIn. I love to connect, and I do my best to respond to inquiries as they come.
Stay tuned, and see you in the next post!
If you want to learn more about my journey from slacker to data scientist, check out this article.
[1] Quote Investigator. (June 10, 2020). Tell Me and I Forget; Teach Me and I May Remember; Involve Me and I Learn. https://quoteinvestigator.com/2019/02/27/tell/
[2] Towards Data Science. (June 11, 2020). How I Would Learn Data Science (If I Had to Start Over). https://towardsdatascience.com/how-i-would-learn-data-science-if-i-had-to-start-over-f3bf0d27ca87
This article was first published in Towards Data Science's Medium publication.
A visual step-by-step guide to replacing the default terminal application with iTerm2.
Over the weekend, I decided to restore my MacBook Pro to factory settings so I could have a clean start at setting up a programming environment.
In this post, we’ll work through setting up oh-my-zsh and iTerm2 on the Mac.
This is what the end-result will look like:
The end-result.
Let’s begin!
Press CMD + SPACE to call the spotlight service.
Start typing in “terminal” and you should see something similar below.
Hit the enter key (gently, of course) to open the terminal application.
If you see something that says “The default interactive shell is now zsh…” it means you’re still using bash as your shell.
Let’s switch to zsh.
Click on “Terminal” and select “Preferences…” as shown below.
This will open up the terminal settings window.
In the “Shells open with” section, click on “Default login shell” as shown below.
Close the window by clicking on the "X" at the top left-hand corner and then restart the terminal. You should now see the terminal using zsh, like the one below.
Installing Powerline Fonts
The theme "agnoster" requires some special fonts to render properly. Let's install them now.
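Assuming the standard Powerline fonts repository on GitHub, first clone it and move into the folder:

git clone https://github.com/powerline/fonts.git
cd fonts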
Type the following command to install the fonts into your system.
./install.sh
The output should be something like one below.
Let’s back up to the parent directory so we could do some cleaning up:
cd ..
You should see the following output indicating the home directory.
Let’s delete the installation folder with the following command:
rm -rf fonts
The fonts folder should be deleted now. Let’s clear our console output.
clear
You should see a clear window now on the console like the one below.
Installing Oh-My-ZSH
Oh-My-ZSH takes care of the configuration for our zsh shell. Let’s install it now.
Type the following into the terminal (do not use any line breaks, this should be only one line):
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
You should now see oh-my-zsh installed on your computer.
If you see a message that says “Insecure completion-dependent directories detected,” we need to set the ZSH_DISABLE_COMPFIX to true in the .zshrc file on the home directory.
To do this, open up a Finder window and navigate to the home directory.
Press SHIFT + CMD + . to reveal hidden files. You should now see something similar below.
Open the .zshrc file using a text editor like Sublime.
This is what the inside of the .zshrc file looks like:
Scroll down around line #73.
Insert the following line right before source $ZSH/oh-my-zsh.sh:
ZSH_DISABLE_COMPFIX="true"
Save and close the .zshrc file, and open a new terminal window. You should see something similar to the one below.
Installing iTerm2
Next, download the iTerm2 installer from iterm2.com and save it in your "Downloads" folder like so:
Open a new Finder window and navigate to “Downloads.” You should see something similar below. Double click on the zip file and it should extract an instance of the iTerm app.
Double-click on “iTerm.app”
If prompted regarding the app being downloaded from the Internet, click "Open."
If prompted to move the app into the application folder, click on "Move to Applications Folder."
Close all windows and press CMD + SPACE to pull up the spotlight search service and type in "iterm." Hit ENTER and you should now see the iTerm App.
Open a Finder window, navigate to the home directory, and find the .zshrc file.
Open the .zshrc file using a text editor.
Find ZSH_THEME=”robbyrussell” and replace “robbyrussell” with “agnoster” as shown below.
Save and close the file. Close any remaining open iTerm window by pressing CTRL + Q.
Restart iTerm by pressing CMD + SPACE and typing in “iterm” as shown in the images below.
Hit the ENTER key and a new iTerm window should open like the one below.
The prompt looks a little weird. Let’s fix it!
Go to iTerm2 and select Preferences… as shown below.
You’ll see something like the image below.
Click on “Profiles.”
Find the "+" in the lower left corner of the window, below the Profile Name area, beside "Tags >".
Click on the “+” sign.
On the General tab, under the Basics area, replace the default “New Profile” name with your preferred profile name. Below, I had typed in “Gunmetal Blue.”
In Title, click on the dropdown and check or uncheck your preferences for the window title appearance.
Navigate to the Colors tab, click on the "Color Presets…" dropdown in the lower right-hand corner of the window, and select "Smooooooth."
Find “Background” in the Basic Colors section and set the color to R:0 G:50 B:150 as shown below.
Navigate to the "Text" tab and find the "Font" section. Select any of the Powerline fonts. Below, I selected "Roboto Mono Medium for Powerline" and increased the font size to 13.
Under the same “Font” section, check “Use a different font for non-ASCII text” and select the same font as before. Refer to the image below.
Next, navigate to the "Window" tab and set the Transparency and Blur as shown below.
Then, navigate to the “Terminal” tab and check “Unlimited scrollback.”
Finally, let's set this newly created profile as the default by clicking on the "Other Actions…" dropdown and selecting "Set as Default" as shown below.
You should now see a star next to the newly created profile, indicating its status as the default profile for new windows.
Restart iTerm and you should see something similar to the one below.
Notice that we can barely see the directory indicator on the prompt. Also, the username@hostname is a little long for my liking. Let's fix those.
Go to the iTerm preferences again and navigate to “Profiles” tab. Find “Blue” on the ANSI Colors under the “Normal” column and click on the colored box.
Set the RGB values as R:0 G:200 B:250 as shown below.
Quit iTerm by pressing CMD + Q and open a Finder window. Navigate to the home directory, reveal the hidden files with SHIFT + CMD + . and double click on the “.oh-my-zsh” folder.
Navigate to and click on the “themes” folder.
Look for the “agnoster.zsh-theme” file and open it using a text editor.
This is what the inside of the theme looks like:
Around line #92, look for the “%n@%m” character string.
Select "%n@%m" and replace it with whatever you'd like to display on the prompt.
Below, I simply replaced “%n@%m” with “Dd” for brevity.
Restart iTerm and you should get something similar to the image below.
If you navigate to a git repository, you’ll see something similar below:
I remember a brief conversation with my boss' boss a while back. He said that he wouldn't be impressed if somebody in the company built a face recognition tool from scratch because, and I quote, "Guess what? There's an API for that." He then went on about the futility of doing something that's already been done instead of just using it.
This gave me an insight into how an executive thinks. Not that they don't care about the coolness factor of a project, but at the end of the day, they're most concerned about how a project will add value to the business and, even more importantly, how quickly it can be done.
In the real world, the time it takes to build a prototype matters. And the quicker we get from data to insights, the better off we will be. This helps us stay agile.
And this brings me to PyCaret.
PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment.[1]
PyCaret is basically a wrapper around some of the most popular machine learning libraries and frameworks, such as scikit-learn and spaCy. Here are the things that PyCaret can do:
Classification
Regression
Clustering
Anomaly Detection
Natural Language Processing
Association Rule Mining
If you're interested in reading about the difference between the traditional NLP approach and PyCaret's NLP module, check out Prateek Baghel's article.
Natural Language Processing
In just a few lines of code, PyCaret makes natural language processing so easy that it's almost criminal. Like most of its other modules, PyCaret's NLP module has a streamlined pipeline that cuts the time from data to insights by more than half.
For example, with only one line, it performs text processing automatically, with the ability to customize stop words. Add another line or two, and you've got yourself a topic model. With yet another line, it gives you a properly formatted plotly graph. And finally, adding another line gives you the option to evaluate the model. You can even tune the model with, guess what, one line of code!
Instead of just telling you all about the wonderful features of PyCaret, maybe it'd be better if we do a little show-and-tell instead.
The Pipeline
For this post, we’ll create an NLP pipeline that involves the following 6 glorious steps:
Getting the Data
Setting up the Environment
Creating the Model
Assigning the Model
Plotting the Model
Evaluating the Model
We will be going through an end-to-end demonstration of this pipeline with a brief explanation of the functions involved and their parameters.
Let’s get started.
Housekeeping
Let us begin by installing PyCaret. If this is your first time installing it, just type the following into your terminal:
pip install pycaret
However, if you have a previously installed version of PyCaret, you can upgrade using the following command:
pip install --upgrade pycaret
Beware: PyCaret is a big library so it’s going to take a few minutes to download and install.
We’ll also need to download the English language model because it is not included in the PyCaret installation.
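The usual way to do this is spaCy's command-line download (assuming the small English model):

python -m spacy download en_core_web_sm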
Next, let’s fire up a Jupyter notebook and import PyCaret’s NLP module:
#import nlp module
from pycaret.nlp import *
Importing pycaret.nlp automatically sets up your environment to perform NLP tasks only.
Getting the Data
Before setup, we need to decide first how we’re going to ingest data. There are two methods of getting the data into the pipeline. One is by using a Pandas dataframe and another is by using a simple list of textual data.
Passing a DataFrame
#import pandas if we're gonna use a dataframe
import pandas as pd
# load the data into a dataframe
df = pd.read_csv('hilaryclinton.csv')
Above, we’re simply loading the data into a dataframe.
Passing a List
# read a file containing a list of text data and assign it to 'lines'
with open('list.txt') as f:
lines = f.read().splitlines()
Above, we're opening the file 'list.txt' and reading it. We assign the resulting list to the variable lines.
Sampling
For the rest of this experiment, we'll just use a dataframe to pass textual data to the setup() function of the NLP module. And for the sake of expediency, we'll sample the dataframe to select only a thousand tweets.
# sampling the data to select only 1000 tweets
df = df.sample(1000, random_state=493).reset_index(drop=True)
Let’s take a quick look at our dataframe with df.head() and df.shape.
Setting Up the Environment
In the line below, we’ll initialize the setup by calling the setup() function and assign it to nlp.
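Here's a sketch of that call; the custom stopword list below is only a placeholder for whatever noise terms fit your data:

# initialize the setup; the stopwords below are placeholders
nlp = setup(data = df, target = 'text', session_id = 493, custom_stopwords = ['rt', 'https', 'amp'])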
With data and target, we’re telling PyCaret that we’d like to use the values in the 'text' column of df. Also, we’re setting the session_id to an arbitrary number of 493 so that we can reproduce the experiment over and over again and get the same result. Finally, we added custom_stopwords so that PyCaret will exclude the specified list of words in the analysis.
Note that if we want to use a list instead, we could replace df with lines and get rid of target = 'text' because a list has no columns for PyCaret to target!
Here’s the output of nlp:
The output table above confirms our session id, number of documents (rows or records), and vocabulary size. It also shows whether or not we used custom stopwords.
Creating the Model
Below, we'll create the model by calling the create_model() function and assigning it to lda. The function already knows to use the dataset that we specified during setup(). In our case, PyCaret knows we want to create a model based on the 'text' column in df.
# create the model
lda = create_model('lda', num_topics = 6, multi_core = True)
In the line above, notice that we passed 'lda' as the first parameter. LDA stands for Latent Dirichlet Allocation. We could've just as easily opted for other types of models.
Here’s the list of models that PyCaret currently supports:
‘lda’: Latent Dirichlet Allocation
‘lsi’: Latent Semantic Indexing
‘hdp’: Hierarchical Dirichlet Process
‘rp’: Random Projections
‘nmf’: Non-Negative Matrix Factorization
I encourage you to research the differences between the models above. To start, check out Lettier's awesome guide on LDA.
The next parameter we used is num_topics = 6. This tells PyCaret to use six topics in the results, numbered 0 to 5. If num_topics is not set, the default number is 4. Lastly, we set multi_core to True to tell PyCaret to use all available CPUs for parallel processing. This saves a lot of computational time.
Assigning the Model
By calling assign_model(), we’re going to label our data so that we’ll get a dataframe (based on our original dataframe: df) with additional columns that include the following information:
Topic percent value for each topic
The dominant topic
The percent value of the dominant topic
# label the data using trained model
df_lda = assign_model(lda)
Let’s take a look at df_lda.
Plotting the Model
Calling the plot_model() function will give us some visualizations about frequency, distribution, polarity, et cetera. The plot_model() function takes three parameters: model, plot, and topic_num. The model parameter tells PyCaret which model to use and must be preceded by a create_model() call. The topic_num parameter designates which topic number (from 0 to 5) the visualization will be based on.
PyCaret offers a variety of plots. The type of graph generated depends on the plot parameter; see the example calls after the list. Here is the list of currently available visualizations:
‘frequency’: Word Token Frequency (default)
‘distribution’: Word Distribution Plot
‘bigram’: Bigram Frequency Plot
‘trigram’: Trigram Frequency Plot
‘sentiment’: Sentiment Polarity Plot
‘pos’: Part of Speech Frequency
‘tsne’: t-SNE (3d) Dimension Plot
‘topic_model’ : Topic Model (pyLDAvis)
‘topic_distribution’ : Topic Infer Distribution
‘wordcloud’: Word cloud
‘umap’: UMAP Dimensionality Plot
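For example, here are a couple of calls using plot types from the list above (the choices are arbitrary):

# word-token frequency plot for the trained model
plot_model(lda, plot = 'frequency')

# sentiment polarity plot
plot_model(lda, plot = 'sentiment')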
Evaluating the Model
Evaluating the model involves calling the evaluate_model() function. It takes only one parameter: the model to be used. In our case, the model is stored in lda, which was created with the create_model() function in an earlier step.
The function returns a visual user interface for plotting.
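The call itself is just one line:

# launch the interactive evaluation interface for the trained model
evaluate_model(lda)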
And voilà, we’re done!
Conclusion
Using PyCaret's NLP module, we were able to go quickly from getting the data to evaluating the model in just a few lines of code. We covered the functions involved in each step and examined the parameters of those functions.
Thank you for reading! PyCaret’s NLP module has a lot more features and I encourage you to read their documentation to further familiarize yourself with the module and maybe even the whole library!
In the next post, I’ll continue to explore PyCaret’s functionalities.
If you want to learn more about my journey from slacker to data scientist, check out the article here.