A friendly tutorial on getting zip codes and other geographic data from street addresses.
Knowing how to deal with geographic data is a must-have for a data scientist. In this post, we will play around with the MapQuest Search API to get zip codes from street addresses along with their corresponding latitude and longitude to boot!
In 2019, my friends and I participated in the CivTechSA Datathon. At one point in the competition, we wanted to visualize the data points and overlay them on San Antonio’s map. The problem was, we had incomplete data. Surprise! All we had were a street number and a street name — no zip code, no latitude, no longitude. We then turned to the great internet for some help.
We found a great API by MapQuest that gave us exactly what we needed. With just a sprinkle of Python code, we were able to accomplish our goal.
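The gist of it looked something like the sketch below. This is not the exact code we wrote at the datathon; the endpoint and response fields are based on MapQuest’s public geocoding documentation, and the API key and address are placeholders:

import requests

API_KEY = 'YOUR_MAPQUEST_KEY'  # placeholder; sign up with MapQuest for a free key
address = '100 Military Plaza, San Antonio, TX'

# ask MapQuest's geocoding endpoint for this address
response = requests.get(
    'https://www.mapquestapi.com/geocoding/v1/address',
    params={'key': API_KEY, 'location': address},
)

# the first location in the first result should carry the zip code and lat/lng
location = response.json()['results'][0]['locations'][0]
print(location.get('postalCode'), location.get('latLng'))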
Today, we’re going to walk through this process.
To follow along, you can download the data from here. Just scroll down to the bottom and tab on over to the Data Catalog 2019. Look for SAWS (San Antonio Water System) as shown below.
Download the file by clicking on the link to the Excel file.
If you’re using Windows 10, it will ask you to open Microsoft Store.
Go ahead and click on the “Install” button.
And let’s get started by clicking on the “Launch” button.
A Thousand Clicks
Click on “Get data” when the splash screen appears.
You will be presented with a lot of file formats and sources; let’s choose “Text/CSV” and click on the “Connect” button.
Select “order_products_prior.csv” and click on the “Open” button.
The image below shows what the data looks like. Click on the “Load” button to load the dataset into Power BI Desktop.
Load the rest of the dataset by selecting “Get Data” and choosing the “Text/CSV” option on the dropdown.
You should have these three files loaded into Power BI Desktop:
You should see the following tables appear on the “Fields” panel of Power BI Desktop, as shown below. (Note: the image shows Power BI in Report View.)
Let’s see what the Data View looks like by clicking on the second icon on the left side of Power BI Desktop.
And now, let’s check out the Model View where we will see how the different tables are related to each other.
If we hover a line, it will turn yellow and the corresponding related fields are both highlighted as well.
In this case, Power BI Desktop is smart enough to infer the two relationships. However, most of the time, we will have to create the relationships ourselves. We will cover this topic in the future.
Let’s go back to the Report View and examine the “Visualizations” panel closely. Look for the “slicer” icon which looks like a square with a funnel at the bottom right corner. Click on it to add a visual to the report.
In the “Fields” panel, find the “department_id” and click the checkbox on its left.
This will cause the “department_id” field to appear under the “Visualizations” panel in the “Field” box.
Next, take your mouse cursor and hover over the top right corner of the visual in the Report View. Click on the three dots that appeared in the corner as shown below.
Click on “List” in the dropdown that appeared.
While the “department_id” visual is selected, you should see corner marks indicating that it is the active visual. With it active, press CTRL+C to copy it and then CTRL+V to paste it. Move the new visual to the right of the original visual.
Make the second visual active by clicking somewhere inside it. Then look for the “aisle_id” field in the “Fields” panel on the right of Power BI Desktop as shown below.
Try selecting a value on the “department_id” visual and observe how the selection on “aisle_id” changes accordingly.
Now, examine the “Visualizations” panel again and click on the table visual as shown below.
In the “Fields” panel, select “product_id” and “product_name” or drag them into the “Values” box.
Power BI Desktop should look similar to the image below.
This time, try selecting a value from both “department_id” and “aisle_id” — observe what happens to the table visual on the right.
Let’s create another visual by copying and pasting the table visual. This time, select (or drag) the following fields to the “Values” box of the visual.
Power BI Desktop should now look similar to the image below.
Try clicking one of the selections in the table visual (where it’s showing “product_id” and “product_name”) and observe how the table on the right changes accordingly.
For a closer look, activate Focus Mode by clicking on the icon as shown below.
The table displays the details of orders that have the product that you selected in the table with “product_id” and “product_name.”
Get out of Focus Mode by clicking on “Back to report” as shown below.
Let’s rename this page or tab by right-clicking on the page name (“Page 1”) and selecting “Rename Page.”
Type in “PRODUCTS” and press ENTER.
Let’s add another page or tab to the report by right-clicking on the page name again (“PRODUCTS”) and selecting “Duplicate Page.”
Rename the new page “TRANSACTIONS” and delete (or remove) the right-most table with order details on it.
Change the top-left visual and update the fields as shown below. The “Fields” box should say “order_dow” while the top-left visual is activated.
Move the visuals around so they look similar to the image below.
Do the same thing for the next visual. This time, select “order_hour_of_day” and your Power BI Desktop should look like the image below.
Do the same thing one last time for the last table and it should now contain fields as shown below.
Let’s add another page or tab to the report by clicking on the “+” icon at the bottom of the report’s main work area.
In the “Visualizations” panel, select “Stacked column chart.”
Resize the chart by dragging its resize handles.
Make sure the “Axis” box contains “order_dow” and the “Values” box contains “order_id.” Power BI Desktop should automatically calculate the count for “order_id” and display the field as “Count of order_id” as shown below.
The graph above is interesting because it shows a higher number of orders for Day 0 and Day 1.
Let’s make another chart.
We will follow the same procedure for adding a chart; this time, we’ll use “order_hour_of_day” in the “Axis” box as shown below.
The graph shows the peak time for the number of orders.
One last graph!
We will add another chart with “days_since_prior_order” in the “Axis” box.
This last graph is the most interesting because the number of reorders peaks during these three time periods: 7 days, 14 days, and 30 days since prior order. This means that people are in a habit of resupplying every week, every two weeks, and every month.
That’s it, folks!
In the next article, we will “prettify” our charts and make them more readable to others.
The procedures above may have felt drawn out, but if you’re a novice Power BI user, don’t despair! With regular practice, the concepts demonstrated in this article will soon become second nature and you’ll probably be able to do them in your sleep.
Let’s load the data into Power BI. Open up Power BI and click on “Get data” on the welcome screen as shown below.
Next, you’ll be presented with another pane that asks what type of data we want to get. Select “Text/CSV” as shown below and click on “Connect.”
When the File Open window appears, navigate to where we saved the dataset and click on the “Open” button on the lower right-hand corner.
When a preview appears, just click on “Load.”
We’ll now see the main working area of Power BI. Head over to the “Visualizations” panel and look for “Line Chart.”
This is what the line chart icon looks like:
Next, a visual placeholder will appear. Grab the hot corner on the lower right-hand side of the placeholder and drag it diagonally down toward the right corner of the main working area.
Next, head over to the “Fields” panel.
With the line chart placeholder still selected, find the “Date” field and click on the square box to put a checkmark on it.
We’ll now see the “Date” field under Axis. Click on the down arrow on the right of the “Date” as shown below.
Select “Date” instead of the default, “Date Hierarchy.”
Then, let’s put a checkmark on the “Births” field.
We’ll now see a line graph like the one below. Head over to the Visualizations panel and, under the list of icons, find the analytics icon as shown below.
Scroll down the panel and find the “Forecast” section. Click on the down arrow to expand it if necessary.
Next, click on “+Add” to add forecasting on the current visualization.
We’ll now see a solid gray fill area and a line plot to the right of the visualization like the one below.
Let’s change the Forecast length to 31 points. In this case, a data point equals a day, so 31 would roughly equate to a month’s worth of predictions. Click on “Apply” on the lower right corner of the Forecast group to apply the changes.
Instead of points, let’s change the unit of measure to “Months” as shown below.
Once we click “Apply,” we’ll see the changes in the visualization. The graph below contains a forecast for 3 months.
What if we wanted to compare how the forecast compares to actual data? We can do this with the “Ignore last” setting.
For this example, let’s ignore the last 3 months of the data. Power BI will then forecast 3 months’ worth of data using the dataset while ignoring the last 3 months. This way, we can compare Power BI’s forecasting result with the actual data in the last 3 months of the dataset.
Let’s click on “Apply” when we’re done changing the settings as shown below.
Below, we can see how Power BI’s forecasting compares with the actual data. The solid black line represents the forecast, while the blue line represents the actual data.
The solid gray fill on the forecast represents the confidence interval. The higher its value, the larger the area will be. Let’s lower our confidence interval to 75% as shown below and see how it affects the graph.
The solid gray fill became smaller as shown below.
Next, let’s take seasonality into account. Below, let’s set it to 90 points, which is equivalent to about 3 months. Setting this value tells Power BI to look for seasonality within a 3-month cycle. Play with this value to find what makes sense for your data.
The result is shown below.
Let’s return our confidence interval to the default value of 95% and scroll down the group to see formatting options.
Let’s change the forecasting line to an orange color and let’s make the gray fill disappear by changing formatting to “None.”
And that’s it! With a few simple clicks of the mouse, we got ourselves a forecast from the dataset.
Let’s read the data into a dataframe. If you want to follow along, you can download the dataset here. This dataset contains Trump’s tweets from the moment he took office on January 20, 2017 to May 30, 2020.
import pandas as pd
from pycaret.nlp import *
df = pd.read_csv('trump_20200530.csv')
Let’s check the shape of our data first:
And let’s take a quick look:
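If you’re following along, the usual pandas calls will do for both checks:

# number of rows and columns
df.shape
# first five rows of the dataframe
df.head()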
For expediency, let’s sample only 1,000 tweets.
# sampling the data to select only 1000 tweets
df = df.sample(1000, random_state=493).reset_index(drop=True)
df.shape
PyCaret’s setup() function performs the following text-processing steps:
Removing Numeric Characters
Removing Special Characters
And all in one line of code!
It takes in two main parameters: the dataframe in data and the name of the text column in target. In our case, we also used the optional parameters session_id for reproducibility and custom_stopwords to reduce the noise coming from the tweets.
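Putting that together, the call looks something like the line below. The column name 'text', the session id value, and the stopword list here are illustrative assumptions rather than the exact ones from my notebook:

# sketch of the setup call; column name, session id, and stopwords are assumptions
nlp = setup(data=df, target='text', session_id=493,
            custom_stopwords=['rt', 'amp', 'https'])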
After all is said and done, we’ll get something similar to this:
In the next step, we’ll create the model and we’ll use ‘lda’:
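A minimal version of that call looks like this, with the topic count left at its default:

# create an LDA topic model from the data we passed to setup()
lda = create_model('lda')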
Thank you for reading! Exploratory data analysis uses a lot of techniques, and we’ve only explored a few in this post. I encourage you to keep practicing and employ other techniques to derive insights from data.
Opinionated advice for the rest of us. Love of math, optional.
Since my article about my journey to data science, I’ve had a lot of people ask me for advice regarding their own journey towards becoming a data scientist. A common theme started to emerge: aspiring data scientists are confused about how to start, and some are drowning because of the overwhelming amount of information available in the wild. So, what’s another, right?
Well, let’s see.
I urge aspiring data scientists to slow it down a bit and take a step back. Before we get to learning, let’s take care of some business first: the fine art of reinventing yourself. Reinventing yourself takes time, so we better get started early on in the game.
In this post, I will share a very opinionated approach to do-it-yourself rebranding as a data scientist. I will assume three things about you:
You’re broke, but you’ve got grit.
You’re willing to sacrifice and learn.
You’ve made a conscious decision to become a data scientist.
Let’s get started!
First Things First
I’m a strong believer in Yoda’s wisdom: “Do or do not, there is no try.” For me, either you do something or you don’t. Failure was not an option for me, and I took comfort in knowing that I wouldn’t really fail unless I quit entirely. So, first bit of advice: don’t quit. Ever.
Begin with the End in Mind
Let’s get our online affairs in order and start thinking about SEO. SEO stands for search engine optimization. The simplest way to think about it is as the very fine art of putting as much “stuff” with your real professional name on the internet as you can, so that when somebody searches for you, all they find is the stuff you want them to find.
In our case, we want the words “data science” or “data scientist” to appear whenever your name appears in the search results.
So let’s start littering the interweb!
Create a professional Gmail account if you don’t already have one. Keep the username boring and professional; play it safe. Start with your first and last name or, if your name is a common one, append it with “data.” Avoid numbers at all costs. If you already have an account but it doesn’t follow the aforementioned guidelines, create another one!
Create a LinkedIn account and use your professional email address. Put “Data Scientist in Training” in the headline. “Data Science Enthusiast” is too weak. We’ve made a conscious decision and committed to the mission, remember? While we’re at it, let’s put the app on our phone too.
If you don’t have a Facebook account yet, create one just so you could claim your name. If you already have one, put that thing on private pronto! Go the extra mile and also delete the app on your phone so you won’t get distracted. Do the same for other social networks like Twitter, Instagram, and Pinterest. Set them to private for now, we’ll worry about cleaning them up later.
Create a Twitter account if you don’t already have one. We can take a little bit of leeway in the username. Make it short and memorable but still professional, so you don’t offend anybody’s sensibilities. If you already have one, decide if you want to keep it or start all over. The main thing to ask yourself: is there any content in your history that can be construed as unprofessional or mildly controversial? Err on the side of caution.
Start following the top voices in data science on LinkedIn and Twitter. Here are a few suggestions: Cassie Kozyrkov, Angela Baltes, Sarah N., Kate Strachnyi, Kristen Kehrer, Favio Vazquez, and of course, my all-time favorite: Eric Weber.
Create a Hootsuite account and connect your LinkedIn and Twitter accounts. Start scheduling data science-related posts. You can share interesting articles from other people about data science or post about your own data science adventures! If you do share other people’s posts, please make sure you give the appropriate credit. Simply adding a URL is lazy and no bueno. Thanks to Eric Weber for this pro-tip!
Take a professional picture and put it as your profile picture in all of your social media accounts. Aim for a neutral background, if possible. Make sure it’s only you in the picture unless you’re Eric (he’s earned his chops so don’t question him! LOL.)
Create a Github account if you don’t have one already. You’re going to need this as you start doing data science projects.
BONUS: if you can spare a few dollars, go to wordpress.org and get yourself a domain that has your professional name on it. I was fortunate enough to have an uncommon name, so I have ednalyn.com, but if your name is common, be creative and make one up that’s recognizably yours. Maybe something like janesmithdoesdatascience.com. Then you can start planning on having your resumé online or maybe even have a blog post or two about data science. As for me, I started with writing my experience when I first started to learn data science.
Clean-up: when time permits, start auditing your social media posts for offensive, scandalous, or unflattering content. If you’re looking to save time, try a service like brandyourself.com. Warning! It can get expensive, so watch where you click.
Do Your Chores
No kidding! When you’re doing household chores, taking a walk, or maybe even while driving, listen to podcasts that talk about data science topics like Linear Digressions and TwiML. Don’t get too bogged down about committing what they say to memory. Just go along with the flow, and sooner or later, the terminology and concepts that they discuss will start to sound familiar. Just remember not to get so caught up in the discussions that you start burning whatever you’re cooking or miss your exit, like I have many times in the past.
Meat and Potatoes
Now that we’ve taken care of the preliminaries of living and breathing data science, it’s time to take care of the meat and potatoes: actually learning about data science.
There’s no shortage of opinions about how to learn data science. There are so many of them that it can overwhelm you, especially when they start talking about learning the foundational math and statistics first.
While theory is important, I don’t see the point of studying it first when I may soon fall asleep or, worse, get so intimidated by the onslaught of mathematical formulas that I get exasperated and end up quitting!
What I humbly propose, rather, is to employ the idea of “minimum viable knowledge,” or MVK, as described by Ken Jee in his article: How I Would Learn Data Science (If I Had to Start Over). Ken Jee describes minimum viable knowledge as learning “just enough to be able to learn through doing.”² I suggest checking it out.
My approach to MVK is pretty straightforward: learn just enough SQL to be able to get data from a database, learn enough Python to have program control and be able to use the pandas library, and then do end-to-end projects, from simple ones to increasingly more challenging ones. Along the way, you’d learn about data wrangling, exploratory data analysis, and modeling. Other techniques like cross-validation and grid search would surely be part of your journey as well. The trick is never to get too comfortable and to always push yourself, slowly.
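To make that concrete, here is a tiny, hypothetical example of minimum viable knowledge in action: just enough SQL to pull rows out of a database and just enough pandas to start poking at them. The file, table, and column names below are made up for illustration:

import sqlite3
import pandas as pd

# hypothetical local database; the table and columns are made up
conn = sqlite3.connect('sales.db')
df = pd.read_sql('SELECT order_date, region, amount FROM orders', conn)

# just enough pandas to start exploring
print(df.describe())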
To the list-oriented, here is my process:
Learn enough SQL and Python to be able to do end-to-end projects with increasing complexity.
For each project, go through the steps of the data science pipeline: planning, acquisition, preparation, exploration, modeling, delivery (story-telling/presentation). Be sure to document your efforts on your Github account.
For each iteration, I suggest doing an end-to-end project that practices each of the following data science methodologies:
natural language processing
And for each methodology, practice its different algorithms, models, or techniques. For example, for natural language processing, you might want to practice the following techniques:
Just Push It
As you do end-to-end projects, it’s a good practice to push your work publicly to Github. Not only will it track your progress, but it also backs up your work in case your local machine breaks down. Not to mention, it’s a great way to showcase your progress. Note that I said progress, not perfection. Generally, people understand if our Github repositories are a little bit messy. In fact, most expect it. At a minimum, just make sure that you have a great README.md file for each repo.
What to put on a Github Repo README.md:
The goal or purpose of the project
Background on the project
How to use the project (if somebody wants to try it for themselves)
Mention your keywords: “data science,” “data scientist,” “machine learning,” et cetera.
Don’t ignore this note: don’t make the big mistake of hard-coding your credentials or any passwords in your public code. Put them in an .env file and .gitignore them. For reference, check out this documentation from Github.
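As one way to do that, assuming you use the python-dotenv package (just one of several options), the pattern looks roughly like this:

# .env lives next to your code and is listed in .gitignore, so it never gets pushed
from dotenv import load_dotenv
import os

load_dotenv()  # reads key=value pairs from the local .env file
db_password = os.getenv('DB_PASSWORD')  # hypothetical variable name defined in .env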
And finally, as you get better with employing different techniques and you begin to do hyper-parameter tuning, I believe at this point that you’re ready to face the necessary evil that is math. And more than likely, the more you understand and develop intuition, the less you’ll hate it. And maybe, just maybe, you’ll even grow to love it.
I have one general recommendation when it comes to learning the math behind data science: take it slow. Be gentle on yourself and don’t set deadlines. Again, there’s no sense in being ambitious and tackling something monumental if it ends up driving you insane. There’s just no fun in it.
There are generally two approaches to learning math.
One is to take the structured approach, which starts with learning the basics and then incrementally takes on the more challenging parts. For this, I recommend Khan Academy. Personalize your learning toward calculus, linear algebra, and statistics. Take small steps and celebrate small wins.
The other approach is slightly geared for more hands-on involvement and takes a little bit of reverse engineering. I call it learning backward. You start with finding out what math concept is involved in a project and breaking down that concept into more basic ideas and go from there. This approach is better suited for those who prefer to learn by doing.
A good example of learning by doing is illustrated by a post on Analytics Vidhya.
Well, learning math sure is hard! It’s so powerful and intense that you’d better take a break often or risk overheating your brain. On the other hand, taking a break does not necessarily mean taking a day off. After all, there is no rest for the weary! Every once in a while, I strongly recommend supplementing your technical studies with a little bit of understanding of the business side of things. For this, I suggest the classic book: Thinking with Data by Max Shron. You can also find a lot of articles here on Medium.
Taking a break can be lonely sometimes, and being alone with only your thoughts can be exhausting. So you may decide to finally talk with your family. The problem is, you’re so motivated and gung-ho about data science that it’s all you can talk about. Sooner or later, you’re going to annoy your loved ones.
It happened to me.
This is why I decided to talk to other people with similar interests. I went to Meetups and started networking with people who are either already practicing data science or, like you, aspiring to become data scientists. In this (hopefully) post-COVID age that we’re in, group video calls are more prevalent. This is actually more beneficial because geography is no longer an issue.
A good resource to start with is LinkedIn. You can use the social network to find others with similar interests or even find local data scientists who can spare an hour or two every month to mentor motivated learners. Start with companies in your local municipality. Find out if they have a data scientist who works there, and if you do find one, kindly send them a personalized message with a request to connect. Give them the option to refuse gracefully, and ask them to point or refer you to another person who does have the time to mentor.
The worst that can happen is they say no. No hard feelings, eh?
Thanks for reading! This concludes my very opinionated advice on rebranding yourself as a data scientist. I hope you got something out of it. I welcome any feedback. If you have something you’d like to add, please post it in the comments or responses.
Let’s continue this discussion!
If you’d like to connect with me, you can reach me on Twitter or LinkedIn. I love to connect, and I do my best to respond to inquiries as they come.
Stay tuned, and see you in the next post!
If you want to learn more about my journey from slacker to data scientist, check out this article.
¹ Quote Investigator. (June 10, 2020). Tell Me and I Forget; Teach Me and I May Remember; Involve Me and I Learn. https://quoteinvestigator.com/2019/02/27/tell/
² Towards Data Science. (June 11, 2020). How I Would Learn Data Science (If I Had to Start Over). https://towardsdatascience.com/how-i-would-learn-data-science-if-i-had-to-start-over-f3bf0d27ca87
This article was first published in Towards Data Science’s Medium publication.
I remember a brief conversation with my boss’ boss a while back. He said that he wouldn’t be impressed if somebody in the company built a face recognition tool from scratch because, and I quote, “Guess what? There’s an API for that.” He then went on about the futility of doing something that’s already been done instead of just using it.
This gave me an insight into how an executive thinks. Not that they don’t care about the coolness factor of a project, but at the end of the day, they’re most concerned about how a project will add value to the business and, even more importantly, how quickly it can be done.
In the real world, the time it takes to build a prototype matters. And the quicker we get from data to insights, the better off we will be. That’s what helps us stay agile.
And this brings me to PyCaret.
PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment.
PyCaret is basically a wrapper around some of the most popular machine learning libraries and frameworks, such as scikit-learn and spaCy. Here are the things that PyCaret can do:
Natural Language Processing
Association Rule Mining
If you’re interested in reading about the difference between traditional NLP approach vs. PyCaret’s NLP module, check out Prateek Baghel’s article.
Natural Language Processing
In just a few lines of code, PyCaret makes natural language processing so easy that it’s almost criminal. Like most of its other modules, PyCaret’s NLP module’s streamlined pipeline cuts the time from data to insights by more than half.
For example, with only one line, it performs text processing automatically, with the ability to customize stop words. Add another line or two, and you’ve got yourself a topic model. With yet another line, it gives you a properly formatted plotly graph. And finally, adding another line gives you the option to evaluate the model. You can even tune the model with, guess what, one line of code!
Instead of just telling you all about the wonderful features of PyCaret, maybe it’d be better if we do a little show and tell instead.
For this post, we’ll create an NLP pipeline that involves the following 6 glorious steps:
Getting the Data
Setting up the Environment
Creating the Model
Assigning the Model
Plotting the Model
Evaluating the Model
We will be going through an end-to-end demonstration of this pipeline with a brief explanation of the functions involved and their parameters.
Let’s get started.
Let us begin by installing PyCaret. If this is your first time installing it, just type the following into your terminal:
pip install pycaret
However, if you have a previously installed version of PyCaret, you can upgrade using the following command:
pip install --upgrade pycaret
Beware: PyCaret is a big library so it’s going to take a few minutes to download and install.
We’ll also need to download the English language model because it is not included in the PyCaret installation.
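At the time of writing, PyCaret’s NLP module relies on spaCy’s English model and TextBlob’s corpora, so the downloads typically look like the two commands below (double-check the current PyCaret docs for the exact requirements):

python -m spacy download en_core_web_sm
python -m textblob.download_corpora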
Next, let’s fire up a Jupyter notebook and import PyCaret’s NLP module:
#import nlp module
from pycaret.nlp import *
Importing pycaret.nlp automatically sets up your environment to perform NLP tasks only.
Getting the Data
Before setup, we need to decide first how we’re going to ingest data. There are two methods of getting the data into the pipeline. One is by using a Pandas dataframe and another is by using a simple list of textual data.
Passing a DataFrame
#import pandas if we're gonna use a dataframe
import pandas as pd
# load the data into a dataframe
df = pd.read_csv('hilaryclinton.csv')
Above, we’re simply loading the data into a dataframe.
Passing a List
# read a file containing a list of text data and assign it to 'lines'
with open('list.txt') as f:
    lines = f.read().splitlines()
Above, we’re opening the file 'list.txt' and reading it. We assign the resulting list to lines.
For the rest of this experiment, we’ll just use a dataframe to pass textual data to the setup() function of the NLP module. And for the sake of expediency, we’ll sample the dataframe to select only a thousand tweets.
# sampling the data to select only 1000 tweets
df = df.sample(1000, random_state=493).reset_index(drop=True)
Let’s take a quick look at our dataframe with df.head() and df.shape.
Setting Up the Environment
In the line below, we’ll initialize the setup by calling the setup() function and assign it to nlp.
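A sketch of that line follows; the stopword list here is a hypothetical set of Twitter noise words, not necessarily the one I used:

# initialize the NLP pipeline; the custom stopword list is illustrative
nlp = setup(data=df, target='text', session_id=493,
            custom_stopwords=['rt', 'http', 'https', 'amp'])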
With data and target, we’re telling PyCaret that we’d like to use the values in the 'text' column of df. Also, we’re setting the session_id to an arbitrary number of 493 so that we can reproduce the experiment over and over again and get the same result. Finally, we added custom_stopwords so that PyCaret will exclude the specified list of words from the analysis.
Note that if we wanted to use a list instead, we could replace df with lines and get rid of target = 'text' because a list has no columns for PyCaret to target!
Here’s the output of nlp:
The output table above confirms our session id, the number of documents (rows or records), and the vocabulary size. It also shows whether or not we used custom stopwords.
Creating the Model
Below, we’ll create the model by calling the create_model() function and assign it to lda. The function already knows to use the dataset that we specified during setup(). In our case, PyCaret knows we want to create a model based on the 'text' column in df.
# create the model
lda = create_model('lda', num_topics = 6, multi_core = True)
In the line above, notice that we passed 'lda' as the parameter. LDA stands for Latent Dirichlet Allocation. We could’ve just as easily opted for other types of models.
Here’s the list of models that PyCaret currently supports:
‘lda’: Latent Dirichlet Allocation
‘lsi’: Latent Semantic Indexing
‘hdp’: Hierarchical Dirichlet Process
‘rp’: Random Projections
‘nmf’: Non-Negative Matrix Factorization
I encourage you to research the differences between the models above. To start, check out Lettier’s awesome guide on LDA.
The next parameter we used is num_topics = 6. This tells PyCaret to use six topics in the results, numbered from 0 to 5. If num_topics is not set, the default number is 4. Lastly, we set multi_core to True to tell PyCaret to use all available CPUs for parallel processing. This saves a lot of computational time.
Assigning the Model
By calling assign_model(), we’re going to label our data so that we’ll get a dataframe (based on our original dataframe: df) with additional columns that include the following information:
Topic percent value for each topic
The dominant topic
The percent value of the dominant topic
# label the data using trained model
df_lda = assign_model(lda)
Let’s take a look at df_lda.
Plotting the Model
Calling the plot_model() function will give us some visualizations about frequency, distribution, polarity, et cetera. The plot_model() function takes three parameters: model, plot, and topic_num. The model parameter tells PyCaret which model to use and must come from a prior create_model() call. topic_num designates which topic number (from 0 to 5) the visualization will be based on.
PyCaret offers a variety of plots. The type of graph generated depends on the plot parameter. Here is the list of currently available visualizations, with a short usage sketch after the list:
‘frequency’: Word Token Frequency (default)
‘distribution’: Word Distribution Plot
‘bigram’: Bigram Frequency Plot
‘trigram’: Trigram Frequency Plot
‘sentiment’: Sentiment Polarity Plot
‘pos’: Part of Speech Frequency
‘tsne’: t-SNE (3d) Dimension Plot
‘topic_model’ : Topic Model (pyLDAvis)
‘topic_distribution’ : Topic Infer Distribution
‘wordcloud’: Word cloud
‘umap’: UMAP Dimensionality Plot
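As a quick, minimal sketch (assuming the lda model we created earlier), calling the default frequency plot and, say, the bigram plot looks like this:

# default word-token frequency plot for the whole corpus
plot_model(lda)
# bigram frequency plot
plot_model(lda, plot='bigram')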
Evaluating the Model
Evaluating the model involves calling the evaluate_model() function. It takes only one parameter: the model to be used. In our case, the model is stored in lda, which we created with the create_model() function in an earlier step.
The function returns a visual user interface for plotting.
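In our case, that boils down to a single line:

# launch the interactive evaluation widget for the lda model
evaluate_model(lda)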
And voilà, we’re done!
Using PyCaret’s NLP module, we were able to go from getting the data to evaluating the model in just a few lines of code. We covered the functions involved in each step and examined the parameters of those functions.
Thank you for reading! PyCaret’s NLP module has a lot more features and I encourage you to read their documentation to further familiarize yourself with the module and maybe even the whole library!
In the next post, I’ll continue to explore PyCaret’s functionalities.
If you want to learn more about my journey from slacker to data scientist, check out the article here.
I have a recurring dream where my instructor from a coding boot camp constantly beats my head with a ruler, telling me to read a package or library’s documentation. Hence, as a pastime, I find myself digging into Python’s or pandas’ documentation.
Today, I found myself wandering into pandas’ .drop() function. So, in this post, I shall attempt to make sense of pandas’ documentation for the ever-famous .drop().
Let’s import pandas and create a sample dataframe.
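Any small dataframe will do as long as it has 'color' and 'score' columns and at least seven rows; the one below is made up for illustration:

import pandas as pd

# made-up sample data with a 'color' and a 'score' column
df = pd.DataFrame({
    'name': ['Ana', 'Ben', 'Cat', 'Dan', 'Eve', 'Fay', 'Gus'],
    'color': ['red', 'blue', 'green', 'red', 'blue', 'green', 'red'],
    'score': [90, 85, 77, 92, 64, 70, 88]
})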
If we type df into a cell in Jupyter notebook, this will give us the whole dataframe:
One-level DataFrame Operations
Now let’s get rid of some columns.
df.drop(['color', 'score'], axis=1)
The code above simply tells pandas to get rid of 'color' and 'score' along axis=1, which means look in the columns. Alternatively, we could skip the axis parameter, which can be confusing, and name the columns directly with the columns parameter. So, let’s try that now:
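That alternative looks like this:

# same result as above, using the columns parameter instead of axis=1
df.drop(columns=['color', 'score'])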
Both of the methods above will result in the following:
Next, we’ll get rid of some rows (or records).
df.drop([1, 2, 4, 6])
Above, we’re simply telling pandas to get rid of the rows with the indices 1, 2, 4, and 6. Note that the indices are passed as a list, [1, 2, 4, 6]. This will result in the following:
MultiIndex DataFrame Operations
In this next round, we’re going to work with a multi-index dataframe. Let’s set it up:
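A minimal setup along these lines works; the foods and numbers are made up, and all that matters is that level 0 holds the food names and level 1 holds carbs, fat, and protein:

import numpy as np

# made-up multi-index data: food items on level 0, nutrients on level 1
midx = pd.MultiIndex.from_product(
    [['apples', 'chicken', 'pork rinds'], ['carbs', 'fat', 'protein']])
df = pd.DataFrame({'grams': np.arange(9) * 3.0}, index=midx)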
Next, let’s get rid of 'pork rinds' because I don’t like them:
df.drop(index='pork rinds', level=0)
And this is what we get:
And finally, let’s cut the fat:
Above, level=1 simply means the second level (since the first level is numbered 0). In this case, that level holds the carbs, fat, and protein entries. By specifying index='fat', we’re telling pandas to get rid of the fat rows in level=1.
Here’s what we get:
So far, with all the playing around we’ve done, if we type df into a cell, the output we get is still the original dataframe without modifications. This is because, by default, .drop() returns a new dataframe and leaves the original one untouched.
But what if we want to make the changes permanent? Enter: inplace.
df.drop(index='fat', level=1, inplace=True)
Above, we added inplace=True as a parameter. This signals pandas that we want the changes applied to the dataframe itself, so that when we output df, this is what we’ll get: