Topic Modeling on PyCaret

I remember a brief conversation with my boss’ boss a while back. He said that he wouldn’t be impressed if somebody in the company built a face recognition tool from scratch because, and I quote, “Guess what? There’s an API for that.” He then went on about the futility of doing something that’s already been done instead of just using it.

This gave me an insight into how an executive thinks. Not that they don’t care about the coolness factor of a project, but at the end of the day, they’re most concerned about how a project will add value to the business and, even more importantly, how quickly it can be done.

In the real world, the time it takes to build a prototype matters. The quicker we get from data to insights, the better off we are. It helps us stay agile.

And this brings me to PyCaret.


PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment.[1]

PyCaret is basically a wrapper around some of the most popular machine learning libraries and frameworks, such as scikit-learn and spaCy. Here are the things that PyCaret can do:

  • Classification
  • Regression
  • Clustering
  • Anomaly Detection
  • Natural Language Processing
  • Association Rule Mining

If you’re interested in reading about the difference between a traditional NLP approach and PyCaret’s NLP module, check out Prateek Baghel’s article.

Natural Language Processing

In just a few lines of code, PyCaret makes natural language processing so easy that it’s almost criminal. Like most of its other modules, PyCaret’s NLP module provides a streamlined pipeline that cuts the time from data to insights by more than half.

For example, with only one line, it performs text processing automatically, with the ability to customize stop words. Add another line or two, and you’ve got yourself a language model. With yet another line, it gives you a properly formatted plotly graph. And finally, adding another line gives you the option to evaluate the model. You can even tune the model with, guess what, one line of code!

Instead of just telling you all about the wonderful features of PyCaret, maybe it’d be better if we do a little show and tell instead.


The Pipeline

For this post, we’ll create an NLP pipeline that involves the following 6 glorious steps:

  1. Getting the Data
  2. Setting up the Environment
  3. Creating the Model
  4. Assigning the Model
  5. Plotting the Model
  6. Evaluating the Model

We will be going through an end-to-end demonstration of this pipeline with a brief explanation of the functions involved and their parameters.

Let’s get started.


Housekeeping

Let us begin by installing PyCaret. If this is your first time installing it, just type the following into your terminal:

pip install pycaret

However, if you have a previously installed version of PyCaret, you can upgrade using the following command:

pip install --upgrade pycaret

Beware: PyCaret is a big library so it’s going to take a few minutes to download and install.

We’ll also need to download the English language model because it is not included in the PyCaret installation.

python -m spacy download en_core_web_sm
python -m textblob.download_corpora

Next, let’s fire up a Jupyter notebook and import PyCaret’s NLP module:

#import nlp module
from pycaret.nlp import *

Importing pycaret.nlp automatically sets up your environment to perform NLP tasks only.

Getting the Data

Before setup, we need to decide first how we’re going to ingest data. There are two methods of getting the data into the pipeline. One is by using a Pandas dataframe and another is by using a simple list of textual data.

Passing a DataFrame

#import pandas if we're gonna use a dataframe
import pandas as pd

# load the data into a dataframe
df = pd.read_csv('hilaryclinton.csv')

Above, we’re simply loading the data into a dataframe.

Passing a List

# read a file containing a list of text data and assign it to 'lines'
with open('list.txt') as f:
    lines = f.read().splitlines()

Above, we’re opening the file 'list.txt' and reading it. We assign the resulting list to lines.

Sampling

For the rest of this experiment, we’ll just use a dataframe to pass textual data to the setup() function of the NLP module. And for the sake of expediency, we’ll sample the dataframe to select only a thousand tweets.

# sampling the data to select only 1000 tweets
df = df.sample(1000, random_state=493).reset_index(drop=True)

Let’s take a quick look at our dataframe with df.head() and df.shape.
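Here’s what those two quick checks look like in a notebook cell:

# peek at the first few rows and check the shape of the sample
df.head()
df.shape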

Setting Up the Environment

In the line below, we’ll initialize the setup by calling the setup() function and assign it to nlp.

# initialize the setup
nlp = setup(data = df, target = 'text', session_id = 493, custom_stopwords = [ 'rt', 'https', 'http', 'co', 'amp'])

With data and target, we’re telling PyCaret that we’d like to use the values in the 'text' column of df. Also, we’re setting the session_id to an arbitrary number, 493, so that we can reproduce the experiment over and over again and get the same result. Finally, we added custom_stopwords so that PyCaret will exclude the specified list of words from the analysis.

Note that if we want to use a list instead, we could replace df with lines and get rid of target = 'text' because a list has no columns for PyCaret to target!
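For illustration, a setup call using the list from earlier might look like the sketch below (assuming lines still holds the text data we read from 'list.txt'):

# initialize the setup with a plain list instead of a dataframe
nlp = setup(data = lines, session_id = 493, custom_stopwords = ['rt', 'https', 'http', 'co', 'amp'])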

Here’s the output of nlp:

The output table above confirms our session id, the number of documents (rows or records), and the vocabulary size. It also shows whether or not we used custom stopwords.

Creating the Model

Below, we’ll create the model by calling the create_model() function and assign it to lda. The function already knows to use the dataset that we specified during setup(). In our case, PyCaret knows we want to create a model based on the 'text' column in df.

# create the model
lda = create_model('lda', num_topics = 6, multi_core = True)

In the line above, notice that we passed 'lda' as the first parameter. LDA stands for Latent Dirichlet Allocation. We could’ve just as easily opted for other types of models.

Here’s the list of models that PyCaret currently supports:

  • ‘lda’: Latent Dirichlet Allocation
  • ‘lsi’: Latent Semantic Indexing
  • ‘hdp’: Hierarchical Dirichlet Process
  • ‘rp’: Random Projections
  • ‘nmf’: Non-Negative Matrix Factorization

I encourage you to research the differences between the models above. To start, check out Lettier’s awesome guide on LDA.

The next parameter we used is num_topics = 6. This tells PyCaret to produce six topics in the results, numbered 0 to 5. If num_topics is not set, the default is 4. Lastly, we set multi_core = True to tell PyCaret to use all available CPUs for parallel processing. This saves a lot of computational time.

Assigning the Model

By calling assign_model(), we’re going to label our data so that we’ll get a dataframe (based on our original dataframe: df) with additional columns that include the following information:

  • Topic percent value for each topic
  • The dominant topic
  • The percent value of the dominant topic

# label the data using trained model
df_lda = assign_model(lda)

Let’s take a look at df_lda.

Plotting the Model

Calling the plot_model() function will give us visualizations of frequency, distribution, polarity, et cetera. The plot_model() function takes three parameters: model, plot, and topic_num. The model parameter tells PyCaret which model to use and must be preceded by a create_model() call. The plot parameter selects the type of visualization, and topic_num designates which topic number (from 0 to 5) the visualization will be based on.

plot_model(lda, plot='topic_distribution')
plot_model(lda, plot='topic_model')
plot_model(lda, plot='wordcloud', topic_num = 'Topic 5')
plot_model(lda, plot='frequency', topic_num = 'Topic 5')
plot_model(lda, plot='bigram', topic_num = 'Topic 5')
plot_model(lda, plot='trigram', topic_num = 'Topic 5')
plot_model(lda, plot='distribution', topic_num = 'Topic 5')
plot_model(lda, plot='sentiment', topic_num = 'Topic 5')
plot_model(lda, plot='tsne')

PyCaret offers a variety of plots. The type of graph generated depends on the plot parameter. Here is the list of currently available visualizations:

  • ‘frequency’: Word Token Frequency (default)
  • ‘distribution’: Word Distribution Plot
  • ‘bigram’: Bigram Frequency Plot
  • ‘trigram’: Trigram Frequency Plot
  • ‘sentiment’: Sentiment Polarity Plot
  • ‘pos’: Part of Speech Frequency
  • ‘tsne’: t-SNE (3d) Dimension Plot
  • ‘topic_model’ : Topic Model (pyLDAvis)
  • ‘topic_distribution’ : Topic Infer Distribution
  • ‘wordcloud’: Word cloud
  • ‘umap’: UMAP Dimensionality Plot

Evaluating the Model

Evaluating the model involves calling the evaluate_model() function. It takes only one parameter: the model to be used. In our case, the model is stored in lda, which was created with the create_model() function in an earlier step.

The function returns a visual user interface for plotting.
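In our case, that call is a one-liner:

# launch the interactive evaluation interface for our LDA model
evaluate_model(lda)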

And voilà, we’re done!

Conclusion

Using PyCaret’s NLP module, we were able to go quickly from getting the data to evaluating the model in just a few lines of code. We covered the functions involved in each step and examined the parameters of those functions.


Thank you for reading! PyCaret’s NLP module has a lot more features and I encourage you to read their documentation to further familiarize yourself with the module and maybe even the whole library!

In the next post, I’ll continue to explore PyCaret’s functionalities.

If you want to learn more about my journey from slacker to data scientist, check out the article here.

Stay tuned!

You can reach me on Twitter or LinkedIn.


[1] PyCaret. (June 4, 2020). Why PyCaret. https://pycaret.org/

Drop It Like It’s Hot

I have a recurring dream where my instructor from a coding boot camp constantly beats my head with a ruler, telling me to read a package or library’s documentation. Hence, as a pastime, I often find myself digging into Python’s or pandas’ documentation.

Today, I found myself wandering into pandas’ .drop() function. So, in this post, I shall attempt to make sense of pandas’ documentation for the ever-famous .drop().


Housekeeping

Let’s import pandas and create a sample dataframe.

import pandas as pd

data = {'fname': ['Priyanka', 'Jane', 'Sarah', 'Jake', 'Tatum', 'Shubham', 'Antonio'],
        'color': ['Red', 'Orange', 'Yellow', 'Green', 'Blue', 'Indigo', 'Violet'],
        'value': [0, 1, 2, 3, 5, 8, 13],
        'score': [345, 778, 124, 554, 864, 908, 456]
       }

df = pd.DataFrame(data)

If we type df into a cell in Jupyter notebook, this will give us the whole dataframe:

One-level DataFrame Operations

Now let’s get rid of some columns.

df.drop(['color', 'score'], axis=1)

The code above simply tells Python to get rid of 'color' and 'score' along axis=1, which means look in the columns. Alternatively, since the axis parameter can be confusing, we could use the columns parameter instead. So, let’s try that now:

df.drop(columns=['color', 'score'])

Both of the methods above will result in the following:

Next, we’ll get rid of some rows (or records).

df.drop([1, 2, 4, 6])

Above, we’re simply telling Python to get rid of the rows with the index of 1, 2, 4, and 6. Note that the indices are passed as a list [1, 2, 4, 6]. This will result in the following:

MultiIndex DataFrame Operations

In this next round, we’re going to work with a multi-index dataframe. Let’s set it up:

data = pd.MultiIndex(levels=[['slim jim', 'avocado', 'banana', 'pork rinds'],
                             ['carbs', 'fat', 'protein']],
                     codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],
                            [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]])

df = pd.DataFrame(index=data, columns=['thinghy', 'majig'],
                  data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
                        [250, 150], [1.5, 0.8], [320, 250],
                        [1, 0.8], [0.3, 0.2], [34.2, 56], [33, 45.1], [67.3, 98]])

This is what the multi-index dataframe looks like:

Now, let’s get rid of the 'thinghy' column with:

df.drop(columns='thinghy')

And this is what we get:

Next, let’s get rid of 'pork rinds' because I don’t like them:

df.drop(index='pork rinds', level=0)

And this is what we get:

And finally, let’s cut the fat:

df.drop(index='fat', level=1)

Above, level=1 simply means the second level (since the levels are numbered starting at 0). In this case, that’s the level containing carbs, fat, and protein. By specifying index='fat', we’re telling Python to get rid of 'fat' in level=1.

Here’s what we get:

Staying Put

So far, with all the playing that we did, if we type df into a cell, the output we get is still the original dataframe without modifications. This is because the changes we’ve been making only affect the displayed output: by default, .drop() returns a new dataframe and leaves the original untouched.

But what if we want to make the changes permanent? Enter: inplace.

df.drop(index='fat', level=1, inplace=True)

Above, we added inplace=True to the parameters. This signals Python that we want the change applied to df itself, so that when we output df, this is what we’ll get:

We have permanently cut the fat off. LOL!
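If you’d rather avoid inplace=True, a common alternative is to assign the result of .drop() back to the variable; here’s a minimal sketch that accomplishes the same thing:

# alternative to inplace=True: reassign the dropped result to df
df = df.drop(index='fat', level=1)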


Thank you for reading! That’s it for today.

Stay tuned!

You can reach me on Twitter or LinkedIn.

Selecting Rows with .loc

As data scientists, we spend most of our time knee-deep in wrangling and manipulating data with Pandas. In this post, we’ll be looking at the .loc property of Pandas to select rows based on some predefined conditions.

Let’s open up a Jupyter notebook, and let’s get wrangling!


The Data

We will be using the 311 Service Calls dataset¹ from the City of San Antonio Open Data website to illustrate how the different .loc techniques work.

Housekeeping

Before we get started, let’s do a little housekeeping first.

import pandas as pd

# to print out all the outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

# set display options
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)

Nothing fancy going on here. We’re just importing the mandatory Pandas library and setting the display options so that when we inspect our dataframe, the columns and rows won’t be truncated by Jupyter. We’re also setting it up so that every output within a single cell is displayed, not just the last one.

def show_missing(df):
    """
    Return the total missing values and the percentage of
    missing values by column.
    """
    null_count = df.isnull().sum()
    null_percentage = (null_count / df.shape[0]) * 100
    empty_count = pd.Series(((df == ' ') | (df == '')).sum())
    empty_percentage = (empty_count / df.shape[0]) * 100
    nan_count = pd.Series(((df == 'nan') | (df == 'NaN')).sum())
    nan_percentage = (nan_count / df.shape[0]) * 100
    return pd.DataFrame({'num_missing': null_count, 'missing_percentage': null_percentage,
                         'num_empty': empty_count, 'empty_percentage': empty_percentage,
                         'nan_count': nan_count, 'nan_percentage': nan_percentage})

In the code above, we’re defining a function that will show us the number of missing or null values and their percentage.

Getting the Data

Let’s load the data into a dataframe.
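Assuming the CSV from the open data portal has been saved locally as 'service_calls.csv' (the filename here is just a placeholder), loading it looks something like this:

# load the 311 service calls data (placeholder filename)
df = pd.read_csv('service_calls.csv')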

Doing a quick df.head() we’ll see the first five rows of the data:

And df.info() will let us see the dtypes of the columns.

Then, show_missing(df) shows us if there are any missing values in the data.

Selecting rows where the column is null or not.

Let’s select the rows where the 'Dept' column has null values, and let’s also filter the dataframe so that null values are excluded.

df['Dept'].value_counts(dropna=False)

df_null = df.loc[df['Dept'].isnull()]
df_null.head()
df_null.shape

df_notnull = df.loc[df['Dept'].notnull()]
df_notnull.head()
df_notnull.shape

First, we did a value count of the 'Dept' column. The method .value_counts() returns a pandas Series listing all the values of the designated column and their frequency. By default, the method ignores NaN values and will not list them. However, if you include the parameter dropna=False, it will include any NaN values in the result.

Next, the line df_null = df.loc[df['Dept'].isnull()] tells the computer to select rows in df where the column 'Dept' is null. The resulting dataframe is assigned to df_null, and all of its rows will have NaN as the value in the ‘Dept’ column.

Similarly, the line df_notnull = df.loc[df['Dept'].notnull()] tells the computer to select rows in df where the column 'Dept' is not null. The resulting dataframe is assigned to df_notnull, and none of its rows will have NaN as the value in the ‘Dept’ column.

The general syntax for these two techniques is:

df_new = df_old.loc[df_old['Column Name'].isnull()]
df_new = df_old.loc[df_old['Column Name'].notnull()]

Selecting rows where the column is a specific value.

The 'Late (Yes/No)' column looks interesting. Let’s take a look at it!

df['Late (Yes/No)'].value_counts(dropna=False)

df_late = df.loc[df['Late (Yes/No)'] == 'YES']
df_late.head()
df_late.shape

df_notlate = df.loc[df['Late (Yes/No)'] == 'NO']
df_notlate.head()
df_notlate.shape

Again, we did a quick value count on the 'Late (Yes/No)' column. Then, we filtered for the cases that were late with df_late = df.loc[df['Late (Yes/No)'] == 'YES']. Similarly, we did the opposite by changing 'YES' to 'NO' and assigned the result to a different dataframe, df_notlate.

The syntax is not much different from the previous example except for the addition of the == sign between the column and the value we want to compare. It basically asks, for every row, whether the value in a particular column (left side) matches the value that we specified (right side). If the match is True, it includes that row in the result. If the match is False, it ignores it.

Here’s the resulting dataframe for df_late:

And here’s the one for df_notlate:

The general syntax for this technique is:

df_new = df_old.loc[df_old['Column Name'] == 'some_value' ]

Selecting rows where the column is not a specific value.

We’ve learned how to select rows based on ‘yes’ and ‘no.’ But what if the values are not binary? For example, let’s look at the ‘Category’ column:

One hundred ninety-two thousand one hundred ninety-seven rows or records do not have a category assigned, but instead of a NaN, empty, or null value, we get 'No Category' as the category itself. What if we want to filter these out? Enter: the != operator.

df.Category.value_counts(dropna=False)

df_categorized = df.loc[df['Category'] != 'No Category']
df_categorized.head()
df_categorized.shape

df_categorized.Category.value_counts(dropna=False)

As usual, we did the customary value counts on the 'Category' column to see what we’re working with. Then, we created the df_categorized dataframe to include any records in the df dataframe that don’t have 'No Category' as their value in the 'Category' column.

Here’s the result of doing a value count on the 'Category' column of the df_categorized dataframe:

As the screenshot above shows, the value counts retained everything but the ‘No Category.’

The general syntax for this technique is:

df_new = df_old.loc[df_old['Column Name'] != 'some_value' ]

Select rows based on multiple conditions.

Let’s consider the following columns, 'Late (Yes/No)' and 'CaseStatus':

What if we wanted to know which open cases have already passed their SLA (service level agreement)? We would need to use multiple conditions to filter the cases, or rows, into a new dataframe. Enter: the & operator.

df_late_open = df.loc[(df['Late (Yes/No)'] == 'YES') & (df['CaseStatus'] == 'Open')]

df_late_open.head()
df_late_open.shape

The syntax is similar to the previous ones except for the introduction of the & operator between the parentheses. In the line df_late_open = df.loc[(df['Late (Yes/No)'] == 'YES') & (df['CaseStatus'] == 'Open')], there are two conditions:

  1. (df['Late (Yes/No)'] == 'YES')
  2. (df['CaseStatus'] == 'Open')

We want both of these to be true for a row to match, so we included the & operator between them. In plain speak, the & bitwise operator simply means AND. Other bitwise operators include the pipe | for OR and the tilde ~ for NOT. I encourage you to experiment with these bitwise operators to get a good feel for what they can do. Just remember to enclose each condition in parentheses so that you don’t confuse Python.
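To illustrate, here are a couple of hypothetical variations using | and ~ (the 'Closed' status value is an assumption about the data, not one we verified above):

# OR: cases that are late or whose status is (hypothetically) 'Closed'
df_late_or_closed = df.loc[(df['Late (Yes/No)'] == 'YES') | (df['CaseStatus'] == 'Closed')]

# NOT: cases that are not late
df_not_late = df.loc[~(df['Late (Yes/No)'] == 'YES')]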

The general syntax for this technique is:

df_new = df_old.loc[(df_old['Column Name 1'] == 'some_value_1') & (df_old['Column Name 2'] == 'some_value_2')]

Select rows having a column value that belongs in some list of values.

Let’s look at the value count for the 'Council District' column:

What if we wanted to focus on districts #2, #3, #4, and #5 because they’re in south San Antonio, and they’re known for getting poor service from the city? (I’m so totally making this up by the way!) In this case, we could use the .isin() method like so:

df['Council District'].value_counts(dropna=False)

df_south = df.loc[df['Council District'].isin([2,3,4,5])]
df_south.head()
df_south.shape

df_south['Council District'].value_counts()

Remember to pass your choices inside the .isin() method as a list, like ['choice1', 'choice2', 'choice3'], because otherwise it will cause an error. For integers, as in our example, it is not necessary to include quotation marks because quotation marks are for string values only.

Here’s the result of our new dataframe df_south:

The general syntax for this technique is:

df_new = df_old.loc[df_old['Column Name'].isin(['choice1', 'choice2', 'choice3'])]

Conclusion

And that’s it! In this post, we loaded the 311 service calls data into a dataframe and created subsets of data using the .loc method.


Thanks for reading! I hope you enjoyed today’s post. Data wrangling, at least for me, is a fun exercise because this is the phase where I first get to know the data and it gives me a chance to hone my problem-solving skills when faced with really messy data. Happy wrangling folks!

Stay tuned!

You can reach me on Twitter or LinkedIn.

[1] City of San Antonio Open Data. (May 31, 2020). 311 Service Calls. https://data.sanantonio.gov/dataset/service-calls

From Slacker to Data Scientist

My journey into data science without a degree.


Butterflies in my belly; my stomach is tied up in knots. I know I’m taking a risk by sharing my story, but I wanted to reach out to others aspiring to be a data scientist. I am writing this with hopes that my story will encourage and motivate you. At the very least, hopefully, your journey won’t be as long as mine.

So, full speed ahead.


I don’t have a PhD. Heck, I don’t even have any degree to speak of. Still, I am fortunate enough to work as a data scientist in a ridiculously good company.

How did I do it? Hint: I had a lot of help.

Never Let Schooling Interfere With Your Education — Grant Allen

Formative Years

It was 1995 and I had just gotten my very first computer. It was a 1982 Apple IIe. It didn’t come with any software but it came with a manual. That’s how I learned my very first computer language: Apple BASIC.

My love for programming was born.

In Algebra class, I remember learning about the quadratic equation. I had a cheap graphing calculator then, a Casio, that was about half the price of a TI-82. It came with a manual too, so I decided to write a program that would solve the quadratic equation for me without much hassle.

My love for solving problems was born.

In my senior year, my parents didn’t know anything about financial aid, but I was determined to go to college, so I decided to join the Navy so that I could use the MGIB to pay for college. After all, four years of service didn’t seem that long.

My love for adventure was born.

Later in my career in the Navy, I was promoted to be the ship’s financial manager. I was in charge of managing multiple budgets. The experience taught me bookkeeping.

My love for numbers was born.

After the Navy, I ended up volunteering for a non-profit. They eventually recruited me to start a domestic violence crisis program from scratch. I had no social work experience, but I agreed anyway.

My love for saying “Why not?” was born.

Rock Bottom

After a few successful years, my boss retired and the new boss fired me. I was devastated. I fell into a deep state of clinical depression and I felt worthless.

I recall crying very loudly at the kitchen table. It had been more than a year since my non-profit job, and I was nowhere near having a prospect for the next one. I was in a very dark space.

Thankfully, the crying fit was a cathartic experience. It gave me a jolt to do some introspection, stop whining, and come up with a plan.

“Choose a Job You Love, and You Will Never Have To Work a Day in Your Life.” — Anonymous

Falling in Love, All Over Again

To pay the bills, I’d been working as a freelance web designer/developer, but I wasn’t happy. Frankly, the business of doing web design bored me. It was frustrating working with clients who think and act like they’re the experts on design.

So I started thinking, “what’s next?”.

Searching the web, I stumbled upon the latest news in artificial intelligence. It led me to machine learning, which in turn led me to the subject of data science.

I was infatuated.

I signed up for Andrew Ng’s machine learning course on Coursera. I listened to TWiML, Linear Digressions, and a few other podcasts. I revisited Python and got reacquainted with git and GitHub.

I was in love.

It was at this time that I made the conscious decision to be a data scientist.

Leap of Faith

Learning something new was fun for me. But still, I had that voice in my head telling me that no matter how much I studied and learned, I would never get a job because I don’t have a degree.

So, I took a hard look in the mirror and acknowledged that I needed help. The question was where to start looking.

Then one day, out of the blue, my girlfriend asked me what data science is. I jumped to my feet and started explaining right away. Once I stopped explaining to catch a breath, I managed to ask her why she asked. That’s when she told me that she’d seen a sign on a billboard. We went for a drive, and I saw the sign for myself. It was a curious billboard with two big words, “data science,” and a smaller one that said “Codeup.” I went to their website and researched their employment outcomes.

I was sold.

Preparation

Before the start of the class, we were given a list of materials to go over.

Given that I had only about two months to prepare, I was not expected to finish the courses. I was basically told to just skim over the content. Well, I did them anyway. I spent day and night going over the courses and materials. Did the tests, got the certificates!

Bootcamp

Boot camp was a blur. We had a saying in the Navy about the boot camp experience: “the days drag on but the weeks fly by.” This was definitely true for the Codeup boot camp as well.

Codeup is described as a “fully-immersive, project-based 18-week Data Science career accelerator that provides students with 600+ hours of expert instruction in applied data science. Students develop expertise across the full data science pipeline (planning, acquisition, preparation, exploration, modeling, delivery), and become comfortable working with real, messy data to deliver actionable insights to diverse stakeholders.”¹

We were coding in Python, querying the SQL database, and making dashboards in Tableau. We did projects after projects. We learned about different methodologies like regression, classification, clustering, time-series, anomaly detection, natural language processing, and distributed machine learning.

More importantly, the experience taught us the following:

  1. Real data is messy; deal with it.
  2. If you can’t communicate with your stakeholders, you’re useless.
  3. Document your code.
  4. Read the documentation.
  5. Always be learning.

Job Hunting

Our job hunting process started from day one of boot camp. We updated our LinkedIn profiles and made sure that we were pushing to GitHub almost every day. I even spruced up my personal website to include the projects we’d done during class. And of course, we made sure that our resumés were in good shape.

Codeup helped me with all of these.

In addition, Codeup also helped prepare us for both technical and behavioral interviews. We practiced answering questions following the S.T.A.R. format (Situation, Task, Action, Result). We optimized our answers to highlight our strengths as high-potential candidates.

Post-Graduation

My education continued even after graduation. In between filling out applications, I would code every day and try out different Python libraries. I regularly read the news for the latest developments in machine learning. While doing chores, I would listen to a podcast, a TED Talk, or a LinkedIn Learning video. When bored, I listened to or read books.

There’s a lot of good technical books out there to read. But for the non-technical ones, I recommend the following:

  • Thinking with Data by Max Shron
  • Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
  • Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez
  • Rookie Smarts: Why Learning Beats Knowing in the New Game of Work by Liz Wiseman
  • Grit: The Power of Passion and Perseverance by Angela Duckworth
  • The First 90 Days: Proven Strategies for Getting Up to Speed Faster and Smarter by Michael Watkins

Dealing with Rejection

I’ve had a lot of rejections. The first one was the hardest but after that, it kept getting easier. I developed a thick skin and just moved on.

Rejection sucks. Try not to take it personally. Nobody likes to fail, but it will happen. When it does, fail up.

Conclusion

It took me 3 months after graduating from boot camp to get a job. It took a lot of sacrifices. When I finally got the job offer, I felt very grateful, relieved, and excited.

I could not have done it without Codeup and my family’s support.


Thanks for reading! I hope you got something out of this post.

To all aspiring data scientists out there, just don’t give up. Try not to listen to all the haters out there. If you must, hear what they have to say, take stock of your weaknesses, and aspire to learn better than yesterday. But never ever let them discourage you. Remember, data science skills lie on a spectrum. If you’ve got the passion and perseverance, I’m pretty sure that there’s a company or organization out there that’s just the right fit for you.

Stay tuned!

You can reach me on Twitter or LinkedIn.

[1] Codeup Alumni Portal. (May 31, 2020). Resumé — Ednalyn C. De Dios. https://alumni.codeup.com/uploads/699-1562875657.pdf

This article was first published in Towards Data Science’s Medium publication.

Populating a Network Graph with Named-Entities

An early attempt of using networkx to visualize the results of natural language processing.


I do a lot of natural language processing and usually, the results are pretty boring to the eye. When I learned about network graphs, it got me thinking, why not use keywords as nodes and connect them together to create a network graph?

Yupp, why not!

In this post, we’ll do exactly that. We’re going to extract named-entities from news articles about coronavirus and then use their relationships to connect them together in a network graph.


A Brief Introduction

A network graph is a cool visual that contains nodes (vertices) and edges (lines). It’s often used in social network analysis, but data scientists also use it for natural language processing.


Natural Language Processing, or NLP, is a branch of artificial intelligence that deals with programming computers to process and analyze large volumes of text and derive meaning from them.¹ In other words, it’s all about teaching computers how to understand human language… like a boss!


Enough introduction, let’s get to coding!


To get started, let’s make sure to take care of all dependencies. Open up a terminal and execute the following commands:

pip install -U spacy
python -m spacy download en
pip install networkx
pip install fuzzywuzzy

This will install spaCy and download the trained model for English. The third command installs networkx, and the last one installs fuzzywuzzy, which we’ll use for some text preprocessing. This should work for most systems. If it doesn’t work for you, check out the documentation for spaCy and networkx.

With that out of the way, let’s fire up a Jupyter notebook and get started!


Imports

Run the following code block into a cell to get all the necessary imports into our Python environment.

import pandas as pd
import numpy as np
import pickle
from operator import itemgetter
from fuzzywuzzy import process, fuzz

# for natural language processing
import spacy
import en_core_web_sm

# for visualizations
%matplotlib inline
from matplotlib.pyplot import figure
import networkx as nx

Getting the Data

If you want to follow along, you can download the sample dataset here. The file was created using newspaper to import news articles from npr.org. If you’re feeling adventurous, use the code snippet below to build your own dataset.

import requests
import json
import time
import newspaper
import pickle

npr = newspaper.build('https://www.npr.org/sections/coronavirus-live-updates')

corpus = []
count = 0
for article in npr.articles:
    time.sleep(1)
    article.download()
    article.parse()
    text = article.text
    corpus.append(text)
    if count % 10 == 0 and count != 0:
        print('Obtained {} articles'.format(count))
    count += 1

corpus300 = corpus[:300]

with open("npr_coronavirus.txt", "wb") as fp:   # Pickling
    pickle.dump(corpus300, fp)

# with open("npr_coronavirus.txt", "rb") as fp:   # Unpickling
#     corpus = pickle.load(fp)

Let’s get our data.

with open('npr_coronavirus.txt', 'rb') as fp:   # Unpickling
    corpus = pickle.load(fp)

Extract Entities

Next, we’ll start by loading spaCy’s English model:

nlp = en_core_web_sm.load()

Then, we’ll extract the entities:

entities = []

for article in corpus[:50]:
    tokens = nlp(''.join(article))
    gpe_list = []
    for ent in tokens.ents:
        if ent.label_ == 'GPE':
            gpe_list.append(ent.text)
    entities.append(gpe_list)

In the above code block, we created an empty list called entities to store a list of lists containing the extracted entities from each of the articles. In the for-loop, we looped through the first 50 articles of the corpus. For each iteration, we converted each article into tokens (words) and then looped through all those words to get the entities that are labeled as GPE (countries, states, and cities). We used ent.text to extract the actual entity and appended them one by one to entities.

Here’s the result:

Note that North Carolina has several variations of its name and some have “the” prefixed in their names. Let’s get rid of them.

articles = []
for entity_list in entities:
    cleaned_entity_list = []
    for entity in entity_list:
        # remove a leading "the " and the possessive endings
        cleaned = entity[4:] if entity.lower().startswith('the ') else entity
        cleaned_entity_list.append(cleaned.replace("'s", "").replace("’s", ""))
    articles.append(cleaned_entity_list)

In the code block above, we’re simply traversing the list of lists in entities, cleaning the entities one by one, and collecting the results in articles. With each iteration, we’re stripping the prefix “the” and getting rid of the possessive 's.

Optional: FuzzyWuzzy

Looking at the entities, I noticed that there are also variations in how “United States” is represented. Some appear as “United States of America” while others are just “United States”. We can trim these down to a more standard naming convention.

FuzzyWuzzy can help with this.

Described by pypi.org as “string matching like a boss,” FuzzyWuzzy uses Levenshtein distance to calculate the similarities between words.¹ For a really good tutorial on how to use FuzzyWuzzy, check out Thanh Huynh’s article, “FuzzyWuzzy: Find Similar Strings Within One Column in Python,” on Towards Data Science.

Here’s the optional code for using FuzzyWuzzy:

choices = set([item for sublist in articles for item in sublist])

cleaned_articles = []
for article in articles:
    article_entities = []
    for entity in set(article):
        article_entities.append(process.extractOne(entity, choices)[0])
    cleaned_articles.append(article_entities)

For the final step before creating the network graph, let’s get rid of the empty lists within our list of lists that were generated by articles that didn’t have any GPE entities.

articles = [article for article in articles if article != []]

Create the Network Graph

For the next step, we’ll create the world in which the graph will exist.

G = nx.Graph()

Then, we’ll manually add the nodes with G.add_nodes_from().

for entities in articles:
    G.add_nodes_from(entities)

Let’s see what the graph looks like with:

figure(figsize=(10, 8))
nx.draw(G, node_size=15)

Next, let’s add the edges that will connect the nodes.

for entities in articles:
    if len(entities) > 1:
        for i in range(len(entities)-1):
            G.add_edges_from([(str(entities[i]), str(entities[i+1]))])

In the code above, we used a conditional so that only lists with two or more entities are considered. Then, we manually connect consecutive entities with G.add_edges_from().

Let’s see what the graph looks like now:

figure(figsize=(10, 8))
nx.draw(G, node_size=10)

This graph reminds me of spiders! LOL.

To organize it a bit, I decided to use the shell version of the network graph:

figure(figsize=(10, 8))
nx.draw_shell(G, node_size=15)

We can tell that some nodes are heavier on connections than others. To see which nodes have the most connections, let’s use G.degree().

G.degree()

This gives the following degree view:

Let’s find out which node or entity has the most number of connections.

max(dict(G.degree()).items(), key = lambda x : x[1])

To find out which other nodes have the most number of connections, let’s check the top 5:

degree_dict = dict(G.degree(G.nodes()))
nx.set_node_attributes(G, degree_dict, 'degree')

sorted_degree = sorted(degree_dict.items(), key=itemgetter(1), reverse=True)

Above, sorted_degree is a list that contains all the nodes and their degree values. We only want the top 5, like so:

print("Top 5 nodes by degree:")
for d in sorted_degree[:5]:
print(d)

Bonus Round: Gephi

Gephi is an open-source and free desktop application that lets us visualize, explore, and analyze all kinds of graphs and networks.²

Let’s export our graph data into a file so we can import it into Gephi.

nx.write_gexf(G, "npr_coronavirus_GPE_50.gexf")

Cool beans!

Next Steps

This time, we only processed 50 articles from npr.org. What would happen if we processed all 300 articles from our dataset? What would we see if we changed the entity type from GPE to PERSON? How else can we use network graphs to visualize natural language processing results?
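As a starting point for that second question, here’s a minimal sketch of the extraction loop with the entity type swapped to PERSON (person_entities is just a new name for the result, not something we defined earlier):

person_entities = []
for article in corpus[:50]:
    tokens = nlp(''.join(article))
    # keep only entities labeled as PERSON instead of GPE
    person_entities.append([ent.text for ent in tokens.ents if ent.label_ == 'PERSON'])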

There’s always more to do. The possibilities are endless!


I hope you enjoyed today’s post. The code is not perfect, and we have a long way to go towards realizing insights from the data. I encourage you to dive deeper and learn more about spaCy, networkx, fuzzywuzzy, and even Gephi.

Stay tuned!

You can reach me on Twitter or LinkedIn.

[1]: Wikipedia. (May 25, 2020). Natural language processing https://en.wikipedia.org/wiki/Natural_language_processing

[2]: Gephi. (May 25, 2020). The Open Graph Viz Platform https://gephi.org/

This article was first published in Towards Data Science’s Medium publication.

From DataFrame to Named-Entities

A quick-start guide to extracting named-entities from a Pandas dataframe using spaCy.


A long time ago in a galaxy far away, I was analyzing comments left by customers and noticed that they seemed to mention specific companies much more than others. This gave me an idea. Maybe there is a way to extract the names of companies from the comments so that I could quantify them and conduct further analysis.

There is! Enter: named-entity-recognition.

Named-Entity Recognition

According to Wikipedia, named-entity recognition or NER “is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.”¹ In other words, NER attempts to extract words that can be categorized as proper names and even numerical entities.

In this post, I’ll share the code that will let us extract named-entities from a Pandas dataframe using spaCy, an open-source library that provides industrial-strength natural language processing in Python and is designed for production use.²

To get started, let’s install spaCy with the following pip command:

pip install -U spacy

After that, let’s download the pre-trained model for English:

python -m spacy download en

With that out of the way, let’s open up a Jupyter notebook and get started!

Imports

Run the following code block into a cell to get all the necessary imports into our Python environment.

# for manipulating dataframes
import pandas as pd

# for natural language processing: named entity recognition
import spacy
from collections import Counter
import en_core_web_sm
nlp = en_core_web_sm.load()

# for visualizations
%matplotlib inline

The important line in this block is nlp = en_core_web_sm.load() because this is what we’ll be using later to extract the entities from the text.

Getting the Data

First, let’s get our data and load it into a dataframe. If you want to follow along, download the sample dataset here or create your own from the Trump Twitter Archive.

df = pd.read_csv('ever_trump.csv')

Running df.head() in a cell will get us acquainted with the data set quickly.

Getting the Tokens

Second, let’s create tokens that will serve as input for spaCy. In the line below, we create a variable tokens that contains all the words in the 'text' column of the df dataframe.

tokens = nlp(''.join(str(df.text.tolist())))

Third, we’re going to extract the entities. For now, we can just look at the most common ones:

items = [x.text for x in tokens.ents]
Counter(items).most_common(20)

Extracting Named-Entities

Next, we’ll extract the entities based on their categories. We have quite a few to choose from, ranging from people to events and even organizations. For a complete list of all that spaCy has to offer, check out their documentation on named entities.


To start, we’ll extract people (real and fictional) using the PERSON type.

person_list = []

for ent in tokens.ents:
    if ent.label_ == 'PERSON':
        person_list.append(ent.text)

person_counts = Counter(person_list).most_common(20)

df_person = pd.DataFrame(person_counts, columns=['text', 'count'])

In the code above, we started by making an empty list with person_list = [].

Then, we utilized a for-loop to go through the entities found in tokens with tokens.ents. After that, we added a conditional that appends to the previously created list if the entity label is equal to the PERSON type.

We want to know how many times a certain entity of the PERSON type appears in the tokens, so we did that with person_counts = Counter(person_list).most_common(20). This line gives us the top 20 most common entities of this type.

Finally, we created the df_person dataframe to store the results and this is what we get:


We’ll repeat the same pattern for the NORP type which recognizes nationalities, religious and political groups.

norp_list = []

for ent in tokens.ents:
    if ent.label_ == 'NORP':
        norp_list.append(ent.text)

norp_counts = Counter(norp_list).most_common(20)

df_norp = pd.DataFrame(norp_counts, columns=['text', 'count'])

And this is what we get:

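Since the original motivation was spotting company names in customer comments, the same pattern also works for the ORG type; here’s a quick sketch (df_org is just a new name, not something we built earlier):

org_list = []
for ent in tokens.ents:
    # ORG covers companies, agencies, institutions, and the like
    if ent.label_ == 'ORG':
        org_list.append(ent.text)

org_counts = Counter(org_list).most_common(20)
df_org = pd.DataFrame(org_counts, columns=['text', 'count'])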

Bonus Round: Visualization

Let’s create a horizontal bar graph of the df_norp dataframe.

df_norp.plot.barh(x='text', y='count', title="Nationalities, Religious, and Political Groups", figsize=(10,8)).invert_yaxis()

Voilà, that’s it!


I hope you enjoyed this one. Natural language processing is a huge topic but I hope that this gentle introduction will encourage you to explore more and expand your repertoire.

Stay tuned!

You can reach me on Twitter or LinkedIn.

[1]: Wikipedia. (May 22, 2020). Named-entity recognition https://en.wikipedia.org/wiki/Named-entity_recognition

[2]: spaCy. (May 22, 2020). Industrial-Strength Natural Language Processing in Python https://spacy.io/

This article was first published in Towards Data Science’s Medium publication.

Create an N-Gram Ranking in Power BI

A quick start guide on building a Python visual with a few simple clicks of the mouse and a dash of code.


In a previous article, I wrote a quick start guide on creating and visualizing an n-gram ranking using nltk for natural language processing. However, I needed a way to share my findings with others who don’t have Python or Jupyter Notebook installed on their machines. I needed to use our organization’s BI reporting tool: Power BI.

Enter Python Visual.

The Python visual allows you to create a visualization generated by running Python code. In this post, we’ll walk through the steps needed to visualize the results of our n-gram ranking using this visual.

First, let’s get our data. You can download the sample dataset here. Then, we could load the data into Power BI Desktop as shown below:

Select Text/CSV and click on “Connect”.

Select the file in the Windows Explorer folder and click open:

Click on “Load”.

Next, find the Py icon on the “Visualizations” panel.

Then, click on “Enable” at the prompt that appears to enable script visuals.

You’ll see a placeholder appear in the main area and a Python script editor panel at the bottom of the dashboard.

Select the ‘text’ column on the “Fields” panel.

You’ll see a predefined script that serves as a preamble for the script that we’re going to write.

In the Python script editor panel, place your cursor at the end of line #6 and hit enter twice.

Then, copy and paste the following code:

import re
import unicodedata
import nltk
from nltk.corpus import stopwords
import matplotlib.pyplot as plt

ADDITIONAL_STOPWORDS = ['covfefe']

def basic_clean(text):
    wnl = nltk.stem.WordNetLemmatizer()
    stopwords = nltk.corpus.stopwords.words('english') + ADDITIONAL_STOPWORDS
    text = (unicodedata.normalize('NFKD', text)
            .encode('ascii', 'ignore')
            .decode('utf-8', 'ignore')
            .lower())
    words = re.sub(r'[^\w\s]', '', text).split()
    return [wnl.lemmatize(word) for word in words if word not in stopwords]

words = basic_clean(''.join(str(dataset['text'].tolist())))
bigrams_series = (pandas.Series(nltk.ngrams(words, 2)).value_counts())[:12]

bigrams_series.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
plt.show()

In a nutshell, the code above extracts n-grams from the 'text' column of the dataset dataframe and creates a horizontal bar graph out of them using matplotlib. The result of plt.show() is what Power BI displays in the Python visual.

For more information on this code, please visit my previous tutorial, “From DataFrame to N-Grams,” on Towards Data Science.

After you’re done pasting the code, click on the “play” icon at the upper right corner of the Python script editor panel.

After a few moments, you should now be able to see the horizontal bar graph like the one below:

And that’s it!

With a few simple clicks of the mouse, along with some help from our Python script, we’re able to visualize the results of our n-gram ranking.


I hope you enjoyed today’s post on one of Power BI’s strongest features. Power BI already has some useful and beautiful built-in visuals but sometimes, you just need a little bit more flexibility. Running Python code helps with this. I hope this gentle introduction will encourage you to explore more and expand your repertoire.

In the next article, I’ll share a quick-start guide to extracting named-entities from a Pandas dataframe using spaCy.

Stay tuned!

You can reach me on Twitter or LinkedIn.

This article was first published in Towards Data Science’s Medium publication.