What are people saying about amusement parks? A Twitter sentiment analysis using Python.

One of the quintessential tasks of open data is sentiment analysis. A very common example of this is using tweets from Twitter’s streaming API. In this article I’m going to show you how to capture Twitter data live, make sense of it and do some basic plots based on the NLTK sentiment analysis library.

What is sentiment analysis?

The result of sentiment analysis is just what it sounds like – an estimate of whether a piece of text is generally happy, neutral, or sad. The magic behind this is a Python library known as NLTK – the Natural Language Toolkit. The smart people who wrote this package took what is known about Natural Language Processing in the literature and packaged it for dummies like me to use. In short, it has a database of commonly used positive and negative words that it checks against and does a basic vote count – positives are +1 and negatives are -1, with the final tally deciding whether the text is positive or negative overall. You can get really smart about how exactly you build that database, but in this article I’m just going to stick with the stock library it comes with.
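
To see what that looks like in practice, here’s a minimal sketch using TextBlob (a wrapper around NLTK that we’ll use later in the analysis script). The example tweets are made up:

# A minimal sketch of the polarity scoring described above, using TextBlob.
# The example tweets are invented.
from textblob import TextBlob

examples = [
    "I love the new parade, best day ever!",
    "Two hour queue and then the ride broke down. Awful.",
]

for text in examples:
    # polarity runs from -1 (negative) through 0 (neutral) to +1 (positive)
    print(text, "->", TextBlob(text).sentiment.polarity)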

Asking politely for your data

Twitter is really open with their data, and it’s worth being nice in return. That means telling them who you are before you start crawling through their servers. Thankfully, they’ve made this really easy as well.

Surf over to the Twitter Apps site, sign in (or create an account if you need to, you luddite) and click on the ‘Create new app’ button. Don’t freak out – I know you’re not an app developer! We just need to do this to create an API key. Now click on the app you just created, then on the ‘Keys and Access Tokens’ tab. You’ll see four strings of letters – your consumer key, consumer secret, access token and access token secret. Copy and paste these and store them somewhere only you can get to – offline on your local drive. If you make these public (by publishing them on GitHub, for example) you’ll have to disable them immediately and get new ones. Don’t underestimate how much a hacker with your keys can completely screw you, Twitter and everyone on it – with you taking all the blame. (One simple way to keep them out of your code is sketched below.)
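
Here’s a sketch of one approach (not the only way to do it): keep the four strings in a small JSON file outside your repository and read them in at run time. The file name and location here are hypothetical – put it wherever suits you, as long as it never gets committed:

# Sketch: ~/twitter_credentials.json (hypothetical name/path) containing:
# {"consumer_key": "...", "consumer_secret": "...",
#  "access_token": "...", "access_token_secret": "..."}
import json
import os

with open(os.path.expanduser("~/twitter_credentials.json")) as f:
    creds = json.load(f)

consumer_key = creds["consumer_key"]
consumer_secret = creds["consumer_secret"]
access_token = creds["access_token"]
access_token_secret = creds["access_token_secret"]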

Now that the serious, scary stuff is over, we can get to streaming some data! The first thing we’ll need to do is create a file that captures the tweets we’re interested in – in our case, anything mentioning Disney, Universal or Efteling. I expect there’ll be a lot more for Disney and Universal given they have multiple parks globally, but I’m kind of interested to see how the Efteling tweets do when just smashed into the NLTK workflow.

Here’s the Python code you’ll need to start streaming your tweets:

# I adapted all this stuff from http://adilmoujahid.com/posts/2014/07/twitter-analytics/ - check out Adil's blog if you get a chance!

#Import the necessary methods from tweepy library
import re
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream

#Variables that contain the user credentials to access the Twitter API
access_token = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
access_token_secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
consumer_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
consumer_secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"


#This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):

    def on_data(self, data):
        # Print each raw tweet (a JSON string) so it can be redirected to a file
        print(data)
        return True

    def on_error(self, status):
        # Print any error status codes (e.g. 420 when you're being rate limited)
        print(status)



if __name__ == '__main__':

    #This handles Twitter authentification and the connection to Twitter Streaming API
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)
    #This line filters the Twitter stream, capturing tweets that contain keywords commonly used in amusement park tweets.
    stream.filter(track= [ "#Disneyland", "#universalstudios", "#universalstudiosFlorida", "#UniversalStudiosFlorida", "#universalstudioslorida", "#magickingdom", "#Epcot","#EPCOT","#epcot", "#animalkingdom", "#AnimalKingdom", "#disneyworld", "#DisneyWorld", "Disney's Hollywood Studios", "#Efteling", "#efteling", "De Efteling", "Universal Studios Japan", "#WDW", "#dubaiparksandresorts", "#harrypotterworld", "#disneyland", "#UniversalStudios", "#waltdisneyworld", "#disneylandparis", "#tokyodisneyland", "#themepark"])

If you’d prefer, you can download this from my Github repo instead here. To be able to use it you’ll need to install the tweepy package using:

pip install tweepy

The only other thing you have to do is enter the strings you got from Twitter in the previous step and you’ll have it running. To save the output to a file, you can use the terminal (cmd in Windows) by running:

python theme_park_tweets.py > twitter_themeparks.txt

For a decent body of text to analyse I ran this for about 24 hours. You’ll see how much I got back for that time and can make your own judgment. When you’re done hit Ctrl-C to kill the script, then open up the file and see what you’ve got.

Yaaaay! Garble!

So you’re probably pretty excited by now – we’ve streamed data live and captured it! You’ve probably been dreaming for the last 24 hours about all the cool stuff you’re going to do with it. Then you get this:

{"created_at":"Sun May 07 17:01:41 +0000 2017","id":861264785677189
120,"id_str":"861264785677189120","text":"RT @CCC_DisneyUni: I have
n't been to #PixieHollow in awhile! Hello, #TinkerBell! #Disney #Di
sneylandResort #DLR #Disneyland\u2026 ","source":"\u003ca href=\"ht
tps:\/\/disneyduder.com\" rel=\"nofollow\"\u003eDisneyDuder\u003c\/
a\u003e","truncated":false,"in_reply_to_status_id":null,"in_reply_t
o_status_id_str":null,"in_reply_to_user_id":null,"in_reply_to_user
_id_str":null,"in_reply_to_screen_name":null,"user":{"id":467539697
0,"id_str":"4675396970","name":"Disney Dude","screen_name":"DisneyDu
der","location":"Disneyland, CA","url":null,"description":null,"pro
tected":false,"verified":false,"followers_count":1237,"friends_coun
t":18,"listed_count":479,"favourites_count":37104,"statuses_count":
37439,"created_at":"Wed Dec 30 00:41:42 +0000 2015","utc_offset":nu
ll,"time_zone":null,"geo_enabled":false,"lang":"en","contributors_e
...

So, not quite garble maybe, but still ‘not a chance’ territory. What we need is something that can make sense of all of this, cut out the junk, and arrange it how we need it for sentiment analysis.
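
Under the hood it’s just JSON, one tweet per line, so Python’s json module can already make a dent in it. Here’s a quick sanity-check sketch that pulls out only the text of the first tweet (adjust the path to wherever you saved the capture file):

import json

# Adjust the path to wherever you saved the capture file
with open("twitter_themeparks.txt") as tweets_file:
    for line in tweets_file:
        line = line.strip()
        if not line:
            continue
        tweet = json.loads(line)
        print(tweet["text"])   # just the tweet text, none of the other metadata
        break                  # only look at the first tweet for now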

To do the whole job we’re going to employ a second Python script that you can find here. We use a bunch of other Python packages that you might need to install with pip – pandas, matplotlib, and TextBlob (which wraps the NLTK libraries I mentioned before); json comes with Python’s standard library. If you don’t want to go to Github (luddite), the code you’ll need is here:

import json
import pandas as pd
import matplotlib.pyplot as plt
from textblob import TextBlob
import re

# These functions come from https://github.com/adilmoujahid/Twitter_Analytics/blob/master/analyze_tweets.py and http://www.geeksforgeeks.org/twitter-sentiment-analysis-using-python//

def extract_link(text):
    """
    This function removes any links in the tweet - we'll put them back more cleanly later
    """
    regex = r'https?://[^\s<>"]+|www\.[^\s<>"]+'
    match = re.search(regex, text)
    if match:
        return match.group()
    return ''

def word_in_text(word, text):
    """
    Use regex to figure out which park or ride they're talking about.
    I might use this in future in combination with my wikipedia scraping script.
    """
    word = word.lower()
    text = text.lower()
    match = re.search(word, text, re.I)
    if match:
        return True
    return False

def clean_tweet(tweet):
    '''
    Utility function to clean tweet text by removing links and special characters
    using simple regex statements.
    '''
    return ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)", " ", tweet).split())

def get_tweet_sentiment(tweet):
    '''
    Utility function to classify sentiment of passed tweet
    using textblob's sentiment method
    '''
    # create TextBlob object of passed tweet text
    analysis = TextBlob(clean_tweet(tweet))
    # set sentiment
    if analysis.sentiment.polarity > 0:
        return 'positive'
    elif analysis.sentiment.polarity == 0:
        return 'neutral'
    else:
        return 'negative'

# Load up the file generated from the Twitter stream capture.
# I've assumed it's loaded in a folder called data which I won't upload because git.
tweets_data_path = '../data/twitter_themeparks.txt'

tweets_data = []
tweets_file = open(tweets_data_path, "r")
for line in tweets_file:
    try:
        tweet = json.loads(line)
    except ValueError:
        # Skip blank or malformed lines (e.g. where the capture was cut off mid-tweet)
        continue
    # The stream occasionally sends non-tweet messages (limit notices etc.) with no 'text'
    if 'text' in tweet:
        tweets_data.append(tweet)
# Check you've created a list that actually has a length. Huzzah!
print(len(tweets_data))

# Turn the tweets_data list into a Pandas DataFrame with a wide section of True/False for which park they talk about
# (Adapted from https://github.com/adilmoujahid/Twitter_Analytics/blob/master/analyze_tweets.py)
tweets = pd.DataFrame()
tweets['user_name'] = [tweet['user']['name'] if tweet['user'] is not None else None for tweet in tweets_data]
tweets['followers'] = [tweet['user']['followers_count'] if tweet['user'] is not None else None for tweet in tweets_data]
tweets['text'] = [tweet['text'] for tweet in tweets_data]
tweets['retweets'] = [tweet['retweet_count'] for tweet in tweets_data]
tweets['disney'] = tweets['text'].apply(lambda tweet: word_in_text(r'(disney|magickingdom|epcot|WDW|animalkingdom|hollywood)', tweet))
tweets['universal'] = tweets['text'].apply(lambda tweet: word_in_text(r'(universal|potter)', tweet))
tweets['efteling'] = tweets['text'].apply(lambda tweet: word_in_text('efteling', tweet))
tweets['link'] = tweets['text'].apply(lambda tweet: extract_link(tweet))
tweets['sentiment'] = tweets['text'].apply(lambda tweet: get_tweet_sentiment(tweet))

# I want to add in a column called 'park' as well that will list which park is being talked about, and add an entry for 'unknown'
# I'm 100% sure there's a better way to do this...
park = []
for index, tweet in tweets.iterrows():
    if tweet['disney']:
        park.append('disney')
    elif tweet['universal']:
        park.append('universal')
    elif tweet['efteling']:
        park.append('efteling')
    else:
        park.append('unknown')

tweets['park'] = park

# Create a dataset that will be used in a graph of tweet count by park
parks = ['disney', 'universal', 'efteling']
tweets_by_park = [tweets['disney'].value_counts()[True], tweets['universal'].value_counts()[True], tweets['efteling'].value_counts()[True]]
x_pos = list(range(len(parks)))
width = 0.8
fig, ax = plt.subplots()
plt.bar(x_pos, tweets_by_park, width, alpha=1, color='g')

# Set axis labels and ticks
ax.set_ylabel('Number of tweets', fontsize=15)
ax.set_title('Tweet Frequency: disney vs. universal vs. efteling', fontsize=10, fontweight='bold')
ax.set_xticks(x_pos)  # bars are centre-aligned by default in matplotlib >= 2.0
ax.set_xticklabels(parks)
# You need to call this for the graph to actually appear.
plt.show()

# Create a graph of the proportion of positive, negative and neutral tweets for each park
# I have to do two groupby's here because I want proportion within each park, not global proportions.
sent_by_park = tweets.groupby(['park', 'sentiment']).size().groupby(level = 0).transform(lambda x: x/x.sum()).unstack()
sent_by_park.plot(kind='bar')
plt.title('Tweet Sentiment proportions by park')
plt.show()

The Results

If you run this in your terminal, it spits out how many tweets you recorded overall, then gives these two graphs:

[Figures: tweet frequency by park, and tweet sentiment proportions by park]

So you can see from the first graph that, out of the tweets I could classify with my dodgy regex skills, Disney was by far the most talked about, with Universal a long way behind. This is possibly to do with the genuine popularity of the parks and the enthusiasm of their fans, but it’s probably more to do with the variety of hashtags and keywords people use for Universal compared to Disney. In retrospect I should have added a lot more of the Universal brands as keywords – things like Marvel or NBC. The Efteling terms didn’t really pick up much at all, which isn’t surprising – most of those tweets would be in Dutch and I really don’t know what keywords they’re using to mark them. I’m not even sure how many Dutch people use Twitter!

The second graph shows something a bit more interesting – Disney parks seem to come out on top in terms of the proportion of positive tweets as well. This is somewhat surprising – after all, Universal and Efteling should elicit similar levels of positive sentiment – but I really don’t trust these results at this point. For one, there’s a good number of tweets I wasn’t able to classify despite filtering on those terms in the initial script. This is probably down to my regex skills, but I’m happy that I’ve proved the point and done something useful in this article. Second, there are far too many neutral tweets in the set, and while I know most tweets are purely informative (“Hey, an event happened!”), this is still too high for me not to be suspicious. When I dig into the tweets themselves I can find ones that are distinctly negative (“Two hours of park time wasted…”) that get classed as neutral. It seems the stock NLTK library might not be all that was promised.
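
If you want to poke at this yourself, you can feed a single tweet straight into TextBlob and look at the raw polarity score rather than the final label, and see whether it matches your own read of the tweet. A quick sketch (the example text is paraphrased, not a verbatim tweet from my data):

from textblob import TextBlob

# Paraphrased example of a tweet that reads as clearly negative to a human
text = "Two hours of park time wasted waiting for a ride that never opened"
print(TextBlob(text).sentiment)   # prints Sentiment(polarity=..., subjectivity=...)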

Stuff I’ll do next time

There are a few things I could do here to improve the analysis. First, I need to work out what went wrong with my filtering and classification terms such that I ended up with so many unclassified tweets. There should be none, and the obvious fix is a single keyword list that both scripts read from – something like the sketch below.
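
Here’s roughly what I have in mind – a small shared module (track_terms.py is a hypothetical name, and the lists are trimmed for the example) that the streaming script and the analysis script both import:

# track_terms.py -- hypothetical shared module
# One master list of keywords, so the stream filter and the park
# classification regexes can never drift apart.
DISNEY = ["#disneyland", "#magickingdom", "#epcot", "#animalkingdom",
          "#disneyworld", "#waltdisneyworld", "#WDW", "#disneylandparis",
          "#tokyodisneyland", "Disney's Hollywood Studios"]
UNIVERSAL = ["#universalstudios", "#UniversalStudiosFlorida",
             "#harrypotterworld", "Universal Studios Japan"]
EFTELING = ["#Efteling", "De Efteling"]

ALL_TRACK_TERMS = DISNEY + UNIVERSAL + EFTELING

# In the streaming script:   stream.filter(track=ALL_TRACK_TERMS)
# In the analysis script:    build each park's regex from the same lists, e.g.
#     disney_pattern = "|".join(term.lstrip("#").lower() for term in DISNEY)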

Second, I should start digging into the language libraries in Python and train my own classifier from the collected data. This is basically linguistic machine learning, but it requires that I go through and rate a bunch of tweets myself – not really something I’m looking forward to. I need to figure out a way to label the data reliably, then build my own corpus to learn from; something along the lines of the sketch below.
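
TextBlob ships with a simple wrapper for exactly this, so once the labelling problem is solved the training step itself is short. A rough sketch (the labelled tweets here are invented placeholders – in practice you’d want hundreds or thousands of them):

from textblob.classifiers import NaiveBayesClassifier

# Invented placeholder labels - in reality these would come from
# hand-rating a decent sample of the captured tweets.
train = [
    ("Best day ever at the Magic Kingdom", "pos"),
    ("The fireworks were absolutely incredible", "pos"),
    ("Two hours of park time wasted", "neg"),
    ("Ride broke down and the staff were rude", "neg"),
]

classifier = NaiveBayesClassifier(train)
print(classifier.classify("Another ride closed all afternoon"))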

Finally, all this work could be presented a lot better in an interactive dashboard that runs off live data. I’ve had some experience with RShiny, but I don’t really want to switch software at this point as it would mean a massive slowdown in processing. Ideally I would work out a javascript solution that I can post on here.

Let me know how you go and what your results are. I’d love to see what things you apply this code to. A lot of credit goes to Adil Moujahid and Nikhil Kumar, upon whose code a lot of this is based. Check out their profiles on github when you get a chance.

Thanks for reading, see you next time 🙂

Using machine learning to improve predictions of visitor numbers

[Image: The torii at EPCOT with the globe thing in the background]

I wrote previously about using the Holt-Winters model for time series analysis, particularly to predict the number of visitors to two of the world’s top theme parks next year. I was using annual data from the last ten or so years (which is all that’s available from the Themed Entertainment Association at this point), and unfortunately we could see quite easily that data this sparse isn’t enough to make a decent prediction.

So the data are horrible, what are you going to do?

This kind of annoyed me – it takes ages to put together all this data in the first place, and the results were disappointing. So I started thinking about other ways I could model this using other data as well, and it was pretty easy to get general information about all these parks, like their location, opening date and company ownership. I can imagine that parks that are close to each other are probably serving a similar crowd and are subject to the same factors. Same with park ownership – the parent companies of these parks each have their own strategies, and parks with the same owner probably share in each other’s successes or failures. But to allow for these sorts of assumptions, I needed some way of adding this information to my model and letting it use this sort of stuff to inform its predictions – something like the sketch below.
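
To give a flavour of what I mean (a sketch with made-up values, not my actual dataset), pandas can turn the categorical park information into model-ready columns via one-hot encoding:

import pandas as pd

# Hypothetical slice of the park information described above
parks = pd.DataFrame({
    "park": ["MAGIC KINGDOM", "EPCOT", "UNIVERSAL STUDIOS JAPAN"],
    "owner": ["Disney", "Disney", "Universal"],
    "country": ["USA", "USA", "Japan"],
    "opened": [1971, 1982, 2001],
})

# One-hot encode the categorical columns so a tree-based model can use them
features = pd.get_dummies(parks, columns=["owner", "country"])
print(features)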

Machine Learning to the rescue

In current Data Science, Machine Learning is sort of a go-to when the normal models fail. It allows us to take a vast array of complex information and use algorithms to learn patterns in the data and make some pretty amazing predictions. In this case we don’t really have Big Data like we would at a major corporation, but given that the numbers are pretty stable and we’re only trying to predict a few cases, it’s possible that this approach could improve our predictions.

Machine what now?

I know, it’s both a confusing and kind of ridiculous name. The whole idea started when computer scientists, mathematicians and statisticians began using computers to run equations millions of times over, using the results of each round, or ‘iteration’, of the calculation to update the next. It started with running some pretty basic models, like linear and logistic regression, over and over, testing the results and adjusting the weights of each factor in the model to improve them each time. Soon people started using these as building blocks in more complicated models, like decision trees, which evolved into Random Forests (which combine the results of hundreds or thousands of decision trees). The sophistication of the building blocks improves daily, as does the ability to stack these blocks into more and more complex combinations of models. The winners of many Kaggle competitions now take the most sophisticated of these methods and combine them for ridiculously accurate predictions of everything from rocket fuel usage to credit card risk. In this article I’m going to use one of the most popular algorithms, the Random Forest. I like these because they can be used for both numeric and categorical data, and do pretty well on both.
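
For the record, the model itself only takes a few lines with scikit-learn. This is a simplified sketch of the approach rather than my exact code (the file name and feature columns are illustrative): fit on everything up to 2015, then predict the next year by bumping the year feature by one.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical file: one row per park per year, with the park metadata
# already one-hot encoded as in the earlier sketch.
data = pd.read_csv("park_visitors_with_features.csv")

train = data[data["year"] <= 2015]
feature_cols = [c for c in data.columns if c not in ("park", "visitors")]

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(train[feature_cols], train["visitors"])

# Predict the following year by adding 1 to the year feature
next_year = data[data["year"] == 2015].copy()
next_year["year"] = next_year["year"] + 1
next_year["predicted_visitors"] = model.predict(next_year[feature_cols])
print(next_year[["park", "predicted_visitors"]])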

The results

This time we actually started getting pretty close to a decent model. Below you can see the graph of predicted and actual (labeled as ‘value’) visitor numbers for each park in 2015:

[Figure: predicted vs. actual visitor numbers by park, 2015]

It’s not too far off in a lot of cases, and pretty much everywhere it’s predicting just below what really happened, except in the case of Disneyland Paris. In a few cases I’m way off, like Universal Studios Japan, which could possibly be due to the stellar performance of all the Universal parks recently. So with this information in hand, here are my predictions for 2016:

DISNEY ANIMAL KINGDOM 10262808.79
DISNEY CALIFORNIA ADVENTURE 7859777.858
DISNEY HOLLYWOOD STUDIOS 10161975.17
DISNEYLAND 15850608.32
DISNEYLAND PARIS 11303153.4
EPCOT 11048540.24
EUROPA PARK 4600339.552
EVERLAND 7108378.079
HONG KONG DISNEYLAND 6508497.992
ISLANDS OF ADVENTURE 7419398.232
MAGIC KINGDOM 17124831.22
NAGASHIMA SPA LAND 5305896.091
OCEAN PARK 6860359.451
SEAWORLD FL 5440392.711
TIVOLI GARDENS 4249590.638
TOKYO DISNEY SEA 13529866.78
TOKYO DISNEYLAND 15279509.39
UNIVERSAL STUDIOS FL 7079618.369
UNIVERSAL STUDIOS HOLLYWOOD 5956300.006
UNIVERSAL STUDIOS JAPAN 9611463.005

If you want to see how these relate to my 2015 predictions, here’s a graph:

[Figure: 2016 predictions plotted alongside the 2015 predictions for each park]

Future stuff

As usual, I can still see a whole lot of things I could do to improve this model. At the moment there are only two variables ‘moving’ with each row – the date and the visitor number. I could add a few more features to improve things – the GDP of the country each park is in, for example (see the sketch below).
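
As a sketch of what that could look like (with invented GDP figures, purely to show the shape of the merge), it’s just a pandas join on country before the encoding step:

import pandas as pd

# Invented GDP-per-capita figures, purely to show the shape of the merge
gdp = pd.DataFrame({
    "country": ["USA", "Japan", "France"],
    "gdp_per_capita": [56000, 39000, 41000],
})

parks = pd.DataFrame({
    "park": ["MAGIC KINGDOM", "TOKYO DISNEYLAND", "DISNEYLAND PARIS"],
    "country": ["USA", "Japan", "France"],
})

# Attach the country-level figure to every park in that country
parks_with_gdp = parks.merge(gdp, on="country", how="left")
print(parks_with_gdp)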

Second, Random Forests are notoriously bad at predicting time series data. In this case I converted the year of the data into a numeric variable rather than a date, adding 1 to it for the prediction. Given that the entries for each park are an equal distance apart (one year per row) I think that’s fair, but maybe I can’t treat annual entries that way. To be fair, though, there don’t seem to be many models particularly good at predicting time series. There are suggestions of using artificial neural networks, but these aren’t particularly noted for time-series or spatio-temporal modelling. I think ‘Data Science’ needs to draw a bit more from Statistics in this case, and I’ll probably look in that direction for improved results in future. Given that it’s annual data I have the advantage of having a long time to process my model, so things like MCMC using Stan might be promising here.

Finally, I need to get more practice at using ggplot2 for pretty graphs. I know a few tricks but my coding chops really aren’t up to building things with the right labels in the right places, especially when there are really long names. In this article I spent ages trying to fit the names of the parks into the first graph, but in the end I really couldn’t figure it out without making it really ugly. I’d love to be able to add my predictions as extensions on a line plot of the observed data, but that seems like epic level ggplot ninja-ing.

I’ll probably continue trying to improve my predictions because it makes me feel like a wizard, but at this point I’ll most likely do that by playing with different models rather than ‘feature engineering’, which is the more popular route on Kaggle.

I’m always keen to hear people’s feedback and I’d love to improve my analyses based on people’s suggestions. Do you think my estimates are accurate, or is there something major I’ve missed?