Getting Disney ride lists from Wikipedia using Python and BeautifulSoup

[Image: This soup is not beautiful]

I've been pretty quiet on this blog for the last few weeks because, as I mentioned a few times, I was hitting the limit of what I could do with the data I could collect manually. Manual data collection has been one of my most hated tasks since I worked as a researcher in the Social Sciences. Back then we had to encode thousands of surveys manually, but in a scenario where the outcome was within a set range of parameters (their answers had to add up to 100, for example). They insisted at the time on manually checking the input, and (groan) colour coding the spreadsheets by hand when it looked like there was a problem. It was the first time I had used conditional formatting in Excel to automate such an arduous task, and I remember everyone's suspicion that I had finished so quickly.

 

Nowadays I work in a tech company dealing with the proverbial 'Big Data' that everyone goes on about. In these scenarios manual coding or checking of your data is not just arduous, it's absolutely impossible, so automating the task is a necessity.

Grey Data

A recent article I read, interviewing someone from Gartner, stated that more than 99% of the information on the Internet is 'grey data'. By this they mean unstructured, unformatted data with themes and meanings hidden beneath layers of language, aesthetics, semiotics and code. Say I want to find out what people think about Universal theme parks in the universe of WordPress blogs. It's pretty rare that the site itself is tagged with any metadata telling a machine 'in this blog I'm talking about theme parks and how I feel about Universal'. However, if I could use a script that reads all the blogs containing the words 'theme park' and 'Universal', I'd be somewhere closer to finding out how people feel about Universal theme parks generally. On top of this, all these blogs probably have memes about Universal attractions and IP, they all use specific fonts and layouts, and they'll all use images of the Universal offerings. If I were able to read these and classify them into something shorter and more standardised, I'd be able to learn a lot more about what people are saying.

From little things, big things grow

As someone with more of an analytical background than a data engineering one, I’ve always been afraid of building my own datasets. In statistics we keep telling each other that we’re specialists, but the reality of the Data Science world is that specialists are just not needed yet – if you’re going to make your bones in the industry you’re going to have to get generalist skills including querying MySQL and Hadoop, and using Spark and Python.  As such, the project I’ve undertaken is to start scraping Wikipedia (to begin with) and see if I can build a bit of a database of theme park knowledge that I can query, or analyse in R.

Scraping isn’t the hard part

So I started looking around online and found a few resources on scraping Wikipedia, but they were either outdated or simply didn't seem to work. There was also the option of DBpedia, which uses Linked Data standards to try and build a sort of dynamic relational database online by scraping the less standardised site. This option sounded really useful, but it looks like they're still very much trying to flesh out WikiDB and it's unlikely they'll get to theme park lists any time soon. So it looks like I'm stuck with StackOverflow threads on what to do.

The first resources I found told me to use BeautifulSoup, which I had never heard of. In short, the way I use it is as a Python module that parses the HTML returned by an http request. It can pick out the standard tags that mark where a table starts and finishes, and then assign the table to an object in Python that you can do things to.

from bs4 import BeautifulSoup
import re
import urllib2
import csv

# Define the page you want to scrape and set up BeautifulSoup to do its magic
wiki = "http://en.wikipedia.org/wiki/List_of_Disney_theme_park_attractions"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki,headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page, "lxml")

attraction = []
full_details = []

But then all you have is a bunch of jargon that looks like this:

<td><a class="mw-redirect" href="/wiki/WEDway_people_mover" title="WEDway people mover">WEDway people mover</a> (aka Tomorrowland Transit Authority)</td>, <td bgcolor="#FF8080">Tomorrowland</td>, <td></td>, <td bgcolor="#80FF80">Tomorrowland</td>…

I can see this has the right information in it, but it really isn't what I'm after for analysis. I need to be able to loop through all of these, find the rows, and figure out where each cell starts and finishes. Thankfully, BeautifulSoup recognises how these are flagged in our jargon string, so I can loop over the rows and cells in the table. Once I can do this, I'll be able to make some sort of data frame that stores all this information in a concise and easily analysable format.
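
Just to make that concrete, here is a minimal sketch of looping over rows and cells with the soup object from the block above. It isn't the full extraction logic (that comes later), it just prints the visible text of each cell:

# Grab the first wikitable on the page and walk its rows ("tr") and cells ("td"),
# printing the visible text so we can see what we're working with
table = soup.find("table", {"class": "wikitable"})
for row in table.findAll("tr"):
    for cell in row.findAll("td"):
        print(cell.get_text(strip=True))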

Learning to read what you’ve got


If you're planning to scrape a Wikipedia table, you're going to have to spend a reasonable amount of time staring at the page you want to scrape to figure out what the code means and how the information you want has been encoded (I'm sure the time spent here reduces greatly with a little more coding skill).

In my case, each column of the table represents one of Disney's theme parks, and each row represents a ride. The first column of the table is the name of the ride, and when that ride is in a given park, the date and region of the ride are written in that park's cell. Very easy to read, but difficult to get into the sort of 'long' format (with individual columns for park, ride and features) that R and Python like to use.
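
To make the 'wide vs. long' distinction concrete, here's a little pandas sketch with made-up data. The real table is much bigger, and the scraping code below builds the long format directly rather than reshaping, so this is purely illustrative:

import pandas as pd

# Hypothetical wide table: one row per ride, one column per park,
# with the land/area written in the cell when the ride exists at that park
wide = pd.DataFrame({
    "ride": ["WEDway people mover", "Haunted Mansion"],
    "Disneyland": ["Tomorrowland", "New Orleans Square"],
    "Magic Kingdom": ["Tomorrowland", "Liberty Square"],
})

# Long format: one row per ride-park combination
long_format = pd.melt(wide, id_vars="ride", var_name="park", value_name="area")
print(long_format)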

The first thing I want to do is get the names of the parks that each ride is attached to. To do this, I define a function that looks for cells with the specific formatting the park names are listed in, and returns all the park names in a list that I'll use later (I still haven't learned to make WordPress respect indentation, so you'll have to do that yourself):

def get_park_names(table):
    '''
    get all the names of the parks in the table - they all have a unique style so I use that to identify them.
    '''
    park = []
    for row in table.findAll("tr"):
        for cell in row:
            a = str(cell)
            if 'style="width:7.14%"' in a:
                m = re.search('(?<=title=")(.*)(?=">)', a)
                park.append(m.group(0))
    return park

I also want to be able to tell if the ride is still open or not, which is encoded in my table with background colour:


def get_open_status(cell):
    '''
    find out whether the ride is still open or not based on the background color of the cell
    '''
    statuses = ["extinct", "planned", "operating"]
    status = ""
    if 'FF8080' in cell:
        status = statuses[0]
    elif 'FFFF80' in cell:
        status = statuses[1]
    elif '80FF80' in cell:
        status = statuses[2]
    elif 'FFA500' in cell:
        status = statuses[0]
    return status

Finally, I need to tie all this together, so I loop through the table rows and look for cells that aren't empty. The loop pulls the name of the ride out of the first cell using regex, pairs it with the park name and open status, puts the three into a dict, and finally appends all the dicts to a list:


# We can do this for one table or many - you can just uncomment this line and unindent the outer for loop
#table = soup.find("table", { "class" : "wikitable"} )
tables = soup.findAll("table", { "class" : "wikitable"})
for table in tables:
    ## Get a list of all the names of the parks in this table
    park = get_park_names(table)
    for row in table.findAll("tr"):
        cells = row.findAll("td")
        #For each "tr", assign each "td" to a variable.
        if len(cells) > 11: # I just counted the columns on the page to get this
            a = str(cells[0]) # Making it a string allows regex
            b = None # Reset so we don't reuse a match from a previous row
            if "href=" in a: # Do this if the row has a link in it
                b = re.search('(?<=title=")(.*)(?=")', a)
            if b is not None: # If there is no title in the row (like when the ride has no link) regex will return 'None'
                # some of the rows are subheadings, but they all contain 'List of' in the string
                if "List of" not in b.group(0):
                    attraction.append(b.group(0))
                    a = b.group(0)
                else:
                    d = re.search("(?<=title=')(.*)(?=')", a) # There is a lack of standardization in the table regarding quotations.
                    if "List of" not in d.group(0):
                        attraction.append(d.group(0))
                        a = d.group(0)
                    else: # The cells with no links just have the name
                        e = re.search('(?<=>)(.*)(?=<)', a)
                        attraction.append(e.group(0))
                        a = e.group(0)
                x = 0 # Make a counter
                for c in cells[1:]:
                    if len(c) > 0: # loop through the cells in each row that aren't blank
                        c = str(c)
                        s = get_open_status(c) #use the function I defined above
                        if "List of" not in c:
                            qqq = {"park": park[x], "ride": a, "status": s} #throw it all into a dict
                            full_details.append(qqq) # I make a list of dicts because it seems like a useful format
                    x = x + 1

So, not really knowing what I want to do with all this new data yet, my final move in the script is to write the whole thing to a csv file:


keys = full_details[0].keys()
with open('parkrides.csv', 'wb') as output_file:
    dict_writer = csv.DictWriter(output_file, keys)
    dict_writer.writeheader()
    dict_writer.writerows(full_details)

And there you have it! A reasonably clean csv that I can read into R or whatever else and start doing some analyses.

Things I learned

The first thing I learned from this exercise is not to feel too dumb when embarking on a new automation task – it looks like there are tonnes of resources available, but it will take all your critical literacy skills to figure out which ones actually work. Or you can just copy and paste their code to realise it doesn't. This is a really frustrating experience for someone starting out, especially when you're led to believe it's easy. My advice here is to keep looking until you find something that works – it's usually not the highest hit on Google, but it's there.

The second thing I learned is that regex is a harsh mistress. Even once you’ve managed to figure out how the whole thing works, you have to do a lot of squinting to figure out what you’re going to tell it to do. Here I don’t think there’s much more one can do except practice more.

Future stuff

There is a whole bunch of things I'm planning to do now I can do this. The first thing will be to try and build some set visualisations to look at which parks are most similar to each other. Disney port their successful rides from one park to the next, so it's really interesting to see what the overlap is, as it shows what they might be expecting their audiences in that area to like. Rides that feature in more parks could be seen as more universally popular, while rides that only ever go to one park are probably more popular with a local audience. In particular, I'd expect Disneyland Paris to have a smaller overlap with other Disney parks, based on my previous clustering of theme park audiences which suggested that Disneyland Paris caters to a more local audience.

 

A summary of the latest theme park reports: What do they tell us?

 

[Image: Concept art of Dubai Parks and Resorts. Credit: Dubai Parks and Resorts]

I've been interested in the reports released recently by OmnicoGroup and Picsolve, two market research companies affiliated with Blooloop, which publishes news and research related to the theme park and entertainment industry. Both have been publicised pretty widely on Twitter and in the mainstream media, but each explores different aspects of the current and future theme park visitor experience.

 

OmnicoGroup: The Theme Park Barometer

The offering from OmnicoGroup is the Theme Park Barometer, a set of questions based on another of their surveys. The full report is 15 pages long, and offers the responses to 38 questions based on 677 UK, 684 US and 670 Chinese respondents. The questions cover the full pre, during and post visit experience including what visitors do as well as what they’d like to do. Their main focus is on the online activities of visitors before and during their visit (18 questions).

Picsolve: The Theme Park of the Future

This report was based on research for Dubai Parks and Resorts, and focuses on photos (7 questions) as well as wearables and cashless technology (8 questions). The whole report is 12 pages, and gives the results of 24 questions based on responses from 500 Dubai residents who had attended a Dubai theme park in the last year. The questions focus almost entirely on future technology, and it seems the research serves as an exploration of some specific ideas more than a full strategic suggestion.

The results

The two surveys cover eight subjects between them, with four subjects covered exclusively by one survey or the other. The subjects, as best I can derive them, are:

Subject – Description
During Visit – What visitors do and want during their visit
During Visit Online – What visitors do and want to do online during their visit
Post Visit – What visitors do and want to do online after their visit
Pre Visit – What visitors do and want to do before their visit
Photos – What visitors do and want to do with the photos and videos of their visit
Wearables and Cashless – The current and future usage of wearable and cashless devices
VR and AR – What visitors expect from Virtual and Augmented Reality technology during their visit
Merchandise – What visitors do and want from their shopping experience during their visit.


I’ve posted the full set of responses to both surveys by subject, as it makes it a little easier to digest it all. However, despite half the subjects being covered by both surveys, it’s difficult to combine the two for many valuable insights.

Probably the most promising subject to look into here is the During Visit Online block, as it has by far the most coverage. Ignoring the different audiences surveyed, we can see that recommendation systems and information applications are highly desirable for a park; however, a quarter of visitors were still unable to find any internet connection during their visit (24%). While there is a lot of interest in the industry in selling the park online and building the post-visit experience, the two reports suggest that today's parks may have a long way to go to satisfy visitors' expectations during the visit itself.

Another insight is that while Picsolve reports a large number of people expecting a range of advanced technology in the future, the OmnicoGroup research shows that they're not expecting it in at least the next three years. For example, while around 90% said they would attend a theme park offering Virtual Reality in the Picsolve report, only 65% expect to see this in the next three years according to the OmnicoGroup report. Another example is 42% of people saying they want holographic entertainment in queues, but only 20% expecting to see holographic assistants in the next three years. Admittedly, having holograms in queues that just talk at you isn't the same as the AI suggested by the term 'holographic assistant', but it shows that none of these expectations are for the near future.

Other than the few insights we get by crossing the two reports, there is also some evidence that despite some of their expectations of online services at the park not being met, visitors still want to engage with the park after the experience. However, the low positive response to questions asking about what visitors actually do after the experience suggests that these post-experience needs are not being met either. The same pattern holds for OmnicoGroup's pre-visit questions, indicating that there is still a lot of low-hanging fruit to grab from visitors before and after their visit. This area should be particularly lucrative for parks, considering that serving a lot of the needs visitors indicated they had, such as reserving merchandise (71%), ride times (81%) and VIP experiences (82%), costs very little to develop and maintain compared to physical features during the visit.

Some criticisms

While I’m really interested in the results of these surveys, there are a couple of things that bug me about them as well.

First is that I don’t know how many people the OmnicoGroup report is based on (edit: since posting this OmnicoGroup contacted me with their respondent numbers – many thanks!). I may have missed this and they have been very responsive and helpful, so I can’t hold this against them.

Second, both reports ask very specific questions, then report the answers as if they were spontaneously offered. It’s a very different thing to ask ‘would you accept temporary tattoos for cashless payments’ and have 91% of people say yes than it is to say ‘91% of people want temporary tattoos for cashless payments’. My point here is not that the questions they asked were wrong or irrelevant, it’s that it is very easy to overclaim using this method. As it stands I know certain things about the theme park audience now, but I don’t know what they might have responded to any other question in the world. Given that the questions of either survey don’t cover the full process of the theme park experience (and don’t claim to), I don’t see how these reports could be used meaningfully for any business, operational or strategic decisions without it being something of a magic ball.

Finally, the Picsolve report is so focussed on specific areas of the theme park experience that I really can't tell if it's research or a sales pitch. Most of the images were provided by Dubai Parks and Resorts, and the final pages are dedicated to talking in general about how much the Dubai market is growing. Further to this, I don't know how many people who live in Dubai would actually attend a Dubai theme park, so I'm not sure the population is really that relevant. On the other hand, a lot of what they write in this report is corroborated by the comments in the Themed Entertainment Association reports, so they're either singing from the same songbook or reading the same reports as me.

What I learned

These reports are highly publicised and look like they took a lot of work to put together. However, something I'm learning is the importance of packaging my findings in a very different way from what Tufte would ask us to do. While statistical visualisations provide a very efficient way of communicating rich information in a small amount of time, it seems that for many people sheets of graphs and numbers are like drinking from a firehose.

On the other hand, I’ve also learned that even minor crossover between two surveys can provide really valuable and useful insights, if only at a general level. I may spend more time in future looking through these results to see what else I can find, and I look forward to building up more of a database of this type of research as it’s released.

So what do you think? Have I made the mistake of looking too hard into the results, or have I missed other useful insights?

During Visit
Survey Question Overall US/UK China
Omnigroup Expect to see holographic assistants in theme parks in the next three years 20% 15% 32%
Omnigroup Expect to see robots as personal assistants in theme parks in the next three years 31% 22% 49%
Picsolve 3D Holograms and lasers made the visit more enjoyable 48%
Picsolve Want holographic videos in queues 42%
Picsolve Want performing actors in queues 41%
Picsolve Want multisensory experiences in queues 38%
Online services during visit
Survey Question Overall US/UK China
Picsolve Unable to log into wifi during visit 13%
Picsolve Unable to find any internet connection during visit 24%
Picsolve Want apps related to the ride with games and entertainment 40%
Picsolve Would be more likely to visit a theme park offering merchandise through a virtual store 85%
Omnigroup Want ability to buy anything in the resort with a cashless device 82% 77% 91%
Omnigroup Want ability to order a table for lunch or dinner and be personally welcomed on arrival 82% 79% 87%
Omnigroup Want recommendations for relevant deals 85% 84%
Omnigroup Want alerts for best times to visit restaurants for fast service 82% 81% 85%
Omnigroup Providing an immediate response to queries and complaints would encourage more engagement on social media during the visit 54% 50% 62%
Omnigroup Offering a discount on rides for sharing photos would encourage more engagement on social media during the visit 50% 47% 57%
Omnigroup Want recommendations for offers to spend in the park 77% 75% 81%
Omnigroup Want recommendations for merchandise and show tickets 74% 71% 80%
Omnigroup Expect to see voice activated mobile apps in theme parks in the next three years 41% 41% 41%
Omnigroup Expect to see Personal digital assistants in theme parks in the next three years 38% 36% 43%
Post-visit
Survey Question Overall US/UK China
Omnigroup Want ability to review trip and receive offers to encourage return visits 81% 79% 84%
Omnigroup Looked at or shared park videos after the visit 50% 41% 69%
Omnigroup Looked at deals or promotions to book next visit after the visit 44% 37% 56%
Omnigroup Posted a review about the stay after the visit 44% 34% 60%
Omnigroup Ordered further merchandise seen during the visit after the visit 25% 16% 42%
Pre-visit online services
Survey Question Overall US/UK China
Omnigroup Pre-booked dining plans before the visit 32% 28% 39%
Omnigroup Pre-booked timeslots on all rides before the visit 31% 24% 44%
Omnigroup Pre-ordered branded purchase before the visit 18% 13% 26%
Omnigroup Want ability to reserve merchandise online before arriving at the resort and collect it at the hotel or pickup point. 71% 65% 83%
Omnigroup Want ability to pre-book dining options for the entire visit 81% 80% 84%
Omnigroup Want to pre-book an entire trip (including meals, etc.) in a single process using a mobile app 89% 90% 91%
Omnigroup Want ability to pre-book a VIP experience 82%
Omnigroup Researched general information about the Park online before the visit 67% 64% 72%
Omnigroup Got directions to particular attractions at the resort before the visit 44% 35% 62%
Photos
Survey Question Overall
Picsolve Want ‘selfie points’ in queues 45%
Picsolve Ability to take photos from rides improves park experience 56%
Picsolve Would visit a theme park offering on-ride videos 90%
Picsolve Would visit a theme park offering AR videos of park moments 88%
Picsolve Would prefer park photos to be sent directly to their phone 90%
Wearables
Survey Question Overall US/UK China
Picsolve Want to use wearable devices for a connected experience within parks 82%
Picsolve Would use wearables to check queue wait times 91%
Picsolve Agree wearables would be an ideal purchasing method 90%
Picsolve Would use wearables to link all park photography in one place 88%
Picsolve Would use wearables to track heart rate and adrenaline on rides 86%
Picsolve Would use wearables to track the number of steps they take at the park 84%
Picsolve Would be more inclined to visit a theme park offering wearable technology for self-service payments 90%
Picsolve Would consider visiting a park offering self-service checkouts 89%
Omnigroup Want ability to buy anything in the resort with a cashless device 82% 77% 91%
Omnigroup Want the park to offer a wide range of options on mobile apps 84% 83% 87%
Omnigroup Want ability to give their friends/family a cashless wristband and have a mobile app to track a topup payments 75% 73% 79%
Omnigroup Expect to see temporary tattoos in place of wristbands in theme parks in the next three years 27% 23% 35%
Virtual and Augmented Reality
Survey Question Overall US/UK China
Picsolve Would be more likely to visit a theme park with VR 94%
Picsolve Would be more likely to visit a theme park with VR based rides 87%
Picsolve Would be interested in VR headsets to view ride photography or videos during the visit 95%
Picsolve Would visit a theme park offering AR videos of park moments 88%
Omnigroup Expect to see Virtual Reality in theme parks in the next three years 65% 62% 70%
Omnigroup Expect to see Augmented Reality games in theme parks in the next three years 33% 25% 49%
Merchandise and Retail
Survey Question Overall US/UK China
Omnigroup Want stores to find merchandise and deliver it to the hotel room or home if the size, colour or style of merchandise is not available 75% 72% 80%
Omnigroup Want stores to find merchandise and arrange for pickup if the size, colour or style of merchandise is not available 72% 70% 75%
Omnigroup Want ability to buy merchandise in resort and have it delivered to home 74% 70% 82%
Omnigroup Want ability to buy merchandise over an app while in queue and have it delivered to home 75% 70% 84%
Omnigroup Want ability to order anywhere in resort for delivery anywhere 77% 73% 84%
Omnigroup Expect to see 3d print personal merchandise in theme parks in the next three years 36% 29% 51%
Omnigroup Want ability to purchase gifts for friends and family for the next visit 81%
Omnigroup Want ability to split restaurant bills 79%

Using ARIMA for improved estimates of Theme park visitor numbers over time

[Image: The entry to Tomorrowland at Magic Kingdom Florida]

I've now had two attempts at predicting theme park visitor numbers, the first using Holt-Winters and the second using Random Forests. Neither one really gave me results I was happy with.

Holt-Winters turned out to be a misguided attempt in the first place, because most of its power comes from the seasonality in the data and I am stuck using annual measurements. Given the pathetic performance of this method, I turned to the Data Scientist's go-to: Machine Learning.

The Random Forest model I built did a lot better at predicting numbers for a single year, but its predictions didn’t change much from year to year as it didn’t recognise the year of measurement as a reading of time. This meant that the ‘year’ variable was much less important than it should have been.

ARIMA: A new hope

Talking to people I work with (luckily for me I get to hang out with some of the most sophisticated and innovative Data Scientists in the world), they asked why I hadn’t tried ARIMA yet. Given that I have annual data, this method would seem to be the most appropriate and to be honest I just hadn’t thought of it because it had never crossed my path.

So I started looking into the approach, and it doesn't seem too difficult to implement. Basically you need to find at least three numbers in place of p, d, and q: the order of the autoregressive part of the model (an effect that changes over time), the degree of differencing (the level of 'integration' between the other two parameters, as far as I know), and the order of the moving average part of the model (how much the error of the model changes over time). You can select these numbers through trial and error, or you can use the auto.arima() function in R, which will give you the 'optimal' model that produces the least possible error from the data. Each of these parameters has a real interpretation, so you can base your 'trial and error' on some intelligent hypotheses about what the data are doing if you are willing to spend the time deep diving into them. In my case I just went with the grid search approach via the auto.arima() function, which told me to go with p = 0, d = 2 and q = 0.
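
The fitting itself was done with auto.arima() in R; for anyone wanting to try the same idea in Python, a rough statsmodels sketch (using a made-up annual series and the (0, 2, 0) order the grid search suggested) would look something like this:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual visitor numbers (in millions) for one park, 2006-2015
visitors = pd.Series([16.6, 17.1, 17.0, 17.2, 17.0, 17.1, 17.5, 18.6, 19.3, 20.5])

# Fit the order the grid search suggested: p = 0, d = 2, q = 0
model = ARIMA(visitors, order=(0, 2, 0)).fit()

# Forecast the next ten years, with a confidence interval around each point
forecast = model.get_forecast(steps=10)
print(forecast.predicted_mean)
print(forecast.conf_int())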

The results

ARIMA seems to overcome both the lack of frequency in the data and the inability of Random Forests to take account of time as a variable. In these results I focus on the newly reinvigorated Universal vs. Disney rivalry in their two main battlegrounds – Florida and Japan.

Here are the ARIMA based predictions for the Florida parks:

[Figures: ARIMA fits and forecasts for Universal Studios Florida and Magic Kingdom]

Both are definitely improving their performance over time, but as both the Holt-Winters and the Random Forest models predicted, Universal Studios is highly unlikely to catch up to Magic Kingdom in its performance. However, unlike the Holt-Winters model, the ARIMA predictions put Universal overtaking Disney well within the realm of possibility. Universal's upper estimate for 2025 is just over 35 million visitors, while Magic Kingdom's lower estimate for the same year is around 25 million. In an extreme situation, it's possible that Universal's visitor numbers will have overtaken Magic Kingdom's by 2025, if we go with what the ARIMA model tells us.

The story for the Japanese parks looks even better for Universal:

[Figures: ARIMA fits and forecasts for Universal Studios Japan and Tokyo Disneyland]

In these cases we see Universal continuing their record-breaking rise, but things don't look so good for Tokyo Disneyland. This is really interesting because both are pretty close replicas of their Florida counterparts and both exist in a booming market. For Tokyo Disneyland not to be seeing at least a predicted increase in visitor numbers, something must be reasonably off. The prediction even shows a good possibility of Tokyo Disneyland's visitor numbers going negative, suggesting the park's future may be limited.

Things I learned

ARIMA definitely seems to be the way to go with annual data, and if I go further down the prediction route (which is pretty likely, to be honest) I'll probably do so by looking at different ways of playing with this modelling approach. This time I used the grid search approach to finding my model parameters, but I'm pretty suspicious of that, not least because I can see myself stuttering to justify my choices when faced with a large panel of angry executives. "The computer told me so" seems like a pretty weak justification outside of tech companies that have the experience of trusting the computer and seeing things go well. There are clearly better methods of finding the optimal parameters for the model, and I think it would be worth looking into them.

I’m also starting to build my suspicion that Disney’s days at the top of the theme park heap are numbered. My recent clustering showed the growing power of a new audience that I suspect is largely young people with no children who have found themselves with a little bit of expendable income all of a sudden. On the other hand, Magic Kingdom and Tokyo Disney serve a different market that arguably consists more of older visitors whose children have now grown up and don’t see the fun in attending theme parks themselves.

Future things

I’ve read about hybrid or ensemble models pretty commonly, which sounds like a useful approach. The basic idea is that you make predictions from multiple models and this produces better results than any individual model on its own. Given how terrible my previous two models have been I don’t think this would help much, but it’s possible that combining different ARIMA models of different groupings could produce better results than a single overall model. Rob Hyndman has written about such approaches recently, but has largely focussed on different ways of doing this with seasonal effects rather than overall predictions.

I also want to learn a lot more about how the ARIMA model parameters affect the final predictions, and how I can add spatial or organisational information to the predictions to make them a little more realistic. For example, I could use the ARIMA predictions for the years where I have observed numbers as input to a machine learning model, then use the future ARIMA predictions in the test data as well.

Do you think my predictions are getting more or less believable over time? What other ideas could I try to get more information out of my data? Is Universal going to be the new ruler of theme parks, throwing us into a brave new unmapped world of a young and wealthy market, or can Disney innovate fast enough to retain their post for another generation to come?  Looking forward to hearing your comments.

 

Clustering theme parks by their audience

[Image: The conductor of the Hogwarts Express interacts with some young visitors at Universal's Islands of Adventure]

I had a go recently at running a K-means clustering on the theme parks in the Themed Entertainment Association reports by their opening dates and locations. This was pretty interesting in the end, and I was able to come up with a pretty nice story of how the parks all fell together.

But it made me wonder – what would it look like (and what would it mean!) if I did the same with visitor numbers?

 Competing for different audiences

Using the elbow method I described in my previous post, I again found that three or six clusters would be useful to describe my population.
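
The clustering itself isn't shown here, but the elbow method is nothing fancy: you fit K-means for a range of k and watch the within-cluster error. A minimal Python sketch with scikit-learn (assuming a hypothetical matrix X with one row per park-year of scaled visitor numbers) would be:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per park-year of (scaled) visitor numbers
X = np.random.rand(200, 1)

# Print the within-cluster sum of squares for each k and look for the
# "elbow" where adding more clusters stops paying off
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(k, km.inertia_)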

[Figure: within-cluster error by number of clusters]

Just like last time, I probably also could defend a choice of eight or even ten clusters, but I really don’t want to be bothered describing that many groups. Joking aside, there is a limit to how many groups you can usefully produce from any cluster analysis – it’s not useful if it just adds complication.

But here’s the issue I ran into immediately:

Universal Studios Japan
Year Cluster (3) Cluster (6)
2006 2 3
2007 2 3
2008 2 3
2009 2 3
2010 2 3
2011 2 3
2012 2 6
2013 2 6
2014 2 6
2015 3 1

It moves clusters over the years! I shouldn't really be surprised – it shows that these theme parks are changing the markets they attract as they add new attractions to the mix. Remember, in this exercise I'm describing audiences as observed through the parks they visit. In my interpretation of these results I'm assuming that audiences don't change over time, but that their image of the various theme parks around the world does change. Let's look at the clusters:

Cluster 1: Magic Kingdom Crew

These are the audiences that love the Disney brand and are loyal to their prestige offerings. If they’re going to a park, it’s a Disney park.

Cluster 1
Magic Kingdom 2006-2015
Disneyland 2009-2015
Tokyo Disney 2013-2015

 

Cluster 2: Local Visitors

These parks are servicing local visitors from the domestic market.

Cluster 2
Disneyland 2006-2008
Disneyland Paris 2007-2009
Tokyo Disney Sea 2006-2015
Tokyo Disneyland 2006-2012

Cluster 3: The new audience

This is an audience that has only emerged recently and offers more profit, with the parks gaining their attention reaping the rewards, as seen by the membership of very successful parks in recent years.

Cluster 3
Disney Animal Kingdom 2006
Disney California Adventure 2012 -2014
Disney Hollywood Studios 2006
Everland 2006-2007, 2013-2015
Hong Kong Disneyland 2013
Islands of Adventure 2011-2015
Ocean Park 2012-2015
Universal Studios Florida 2013-2014
Universal Studios Hollywood 2015
Universal Studios Japan 2006- 2011

Cluster 4: The traditionalists

This group is defined by the type of visitor that attends Tivoli Gardens. Maybe they are more conservative than other theme park audiences, and see theme parks as a place primarily for children.

Cluster 4
Europa Park 2006-2014
Hong Kong Disneyland 2006-2010
Islands of Adventure 2009
Nagashima Spa Land 2006-2010
Ocean Park 2006-2009
Seaworld Florida 2010 – 2015
Tivoli Gardens 2006 -2015
Universal Studios Hollywood 2006-2011

Cluster 5: Asian boom market

This audience seems to be associated with the new wave of visitors from the Asian boom, as seen by the recent attention to Asian parks like Nagashima Spa Land.

Cluster 5
Disney California Adventure 2006-2011
Europa Park 2015
Everland 2008-2012
Hong Kong Disneyland 2011-2012, 2015
Islands of Adventure 2006-2008, 2010
Nagashima Spa Land 2011-2015
Ocean Park 2010-2011
Seaworld Florida 2006-2009, 2012
Universal Studios Florida 2006-2012
Universal Studios Hollywood 2012-2014

 

Cluster 6: Family visitors

These all seem like parks where you’d take your family for a visit, so that seems to be a likely feature of this cluster.

Cluster 6
Disney Animal Kingdom 2007-2015
Disney California Adventure 2015
Disney Hollywood Studios 2007-2015
Disneyland Paris 2010-2015
EPCOT 2006-2015
Tokyo Disney Sea 2011
Universal Studios Florida 2015
Universal Studios Japan 2014

I tried a couple of other methods – taking the last cluster for each park, and the most frequent cluster for each park – but these were even less informative than what I reproduced here. In the first case the clusters didn't look much different and didn't really change the interpretation. This is probably because my interpretation relies on what I've learned about each of these parks, which is based on very recent information. In the second case I reduced the number of clusters, but many of them ended up containing a single park (damn Tivoli Gardens and its outlier features!)

Lessons learned

This work was sloppy as anything – I really put very little faith in my interpretation. I learned here that a clustering is only as good as the data you give it, and in the next iteration I will probably try and combine the data from my previous post (some limited ‘park characteristics’) to see how that changes things. I expect the parks won’t move around between the clusters so much if I add that data, as audiences are much more localised than I’m giving them credit for.

I also learned that a simple interpretation of the data can still leave you riddled with doubt when it comes to the subjective aspects of the analysis. I have said that I am clustering 'audience types' here by observing how many people went to each 'type' of park. But I can't really say that's fair – just because two parks have similar numbers of visitors doesn't imply that those are the same visitors. Intuitively I would say the opposite! I think adding in the location, owner and other information like the types of rides they have (scraping wikiDB in a future article!) would really help this.

Future stuff

Other than the couple of things I just mentioned, I’d love to start looking at the attractions different parks have and classifying them that way. Once I have the attraction data I could look at tying this to my visitor numbers or ownership data to see if I can determine which type of new attractions are most popular for visitors, or determine which attractions certain owners like the most. In addition, I can’t say I really know what these parks were like over the last ten years, nor what a lot of them are like now. Perhaps understanding more about the parks themselves would give some idea as to the types of audiences these clusters describe.

What do you think? Am I pulling stories out of thin air, or is there something to this method? Do you think the other parks in Cluster 3 will see the same success as Islands of Adventure and Universal Studios Japan have indicated they will see? I’d love to hear your thoughts.

Record numbers at Universal Studios Japan: The continued rise of Universal, or a story of the Asian Boom?

 

[Image: The Universal Studios Japan main entrance. Credit: Travelcaffeine.com]

Today Universal Studios Japan released a report showing that they had received a record number of visitors last month. The news led me to wonder – was this new record the result of Universal Studios’ meteoric rise as of late, or was it more a symptom of the renewed interest in Asian theme parks in the last few years?

Pulling apart the causes of things with multivariate regression

One of the most basic tools in the Data Scientist's toolkit is multivariate regression. Not only is it a useful model in its own right, but I've also used its output as a component of other models in the past. Basically, it looks at how much the change in each predictor explains the change in the outcome, and gives each variable a weighting. It only works when the relationships are linear, but people tend to use it as a starting point for pretty much every question with a bunch of predictors and a continuous outcome.
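
For anyone who wants to reproduce the idea in Python, a rough statsmodels sketch of this kind of model (assuming a hypothetical park_visitors.csv with one row per park per year, and 0/1 dummy columns for Universal ownership and Asian location) would be:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per park per year, with columns
# visitors (count), year (numeric), universal (0/1) and asia (0/1)
df = pd.read_csv("park_visitors.csv")

# Regress visitor numbers on year, ownership and location;
# year * universal * asia expands to all main effects and interactions
model = smf.ols("visitors ~ year * universal * asia", data=df).fit()
print(model.summary())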

Is the Universal Studios Japan record because it is Universal, or because it’s in Asia?

To answer this question I ran a multivariate regression on annual park visitor numbers, using dummy variables indicating whether the park was Universal owned and whether it was in Asia. After a decent amount of messing around in ggplot, I managed to produce these two plots:

[Figures: Visitor numbers and regression fits over time. In the first plot, black is not Universal and red is Universal; in the second, black is not Asia and red is Asia]

In these two plots we can see that the Universal parks are catching up to the non-Universal parks, while the Asian parks still aren’t keeping pace with the non-Asian parks. So far this is looking good for the Universal annual report!

This is confirmed by the regression model, the results of which are pasted below:

Coefficients:
Estimate Std. Error t value p-value
(Intercept) 7831953 773691 10.123 2.00E-16
year 126228 125587 1.005 0.3158
universal -3522019 1735562 -2.029 0.0435
asia -1148589 1228394 -0.935 0.3507
universal*asia 3044323 3341146 0.911 0.3631
year*universal 234512 280112 0.837 0.4033
year*asia 31886 193528 0.165 0.8693
year*universal*asia 267672 536856 0.499 0.6185

In this we can see that, firstly, only Universal ownership has a significant effect in the model. But you can also see that the estimate of the effect is negative, which is confusing until you account for time, which is the year*universal row of the table. We can see here that for each consecutive year, we expect a Universal park to gain 234,512 more visitors than a non-Universal park. On the other hand, we'd only expect an Asian park to gain 31,886 more visitors than a non-Asian park for each consecutive year over the dataset. This suggests that being a Universal park is far more responsible for Universal Studios Japan's record visitor numbers than its location. However, the model fit for this is really bad (around .02), which suggests I'm doing worse than stabbing in the dark in reality.

Lessons learned

The main thing I learned is that it's really complicated to get your head around interpreting multivariate regression. Despite it being one of the things you learn in first-year statistics, and something I've taught multiple times, it still boggles the brain to work in many dimensions of data.

The second thing I learned is that I need to learn more about the business structure of the theme park industry to be able to provide valuable insights based on models with the right variables. Having such a terrible model fit usually says there's something major I've forgotten, so getting a bit more knowledgeable about how things are done in these areas would give me an idea of the variables I need to add to increase my accuracy.

Future things to do

The first thing to do here would be to increase my dataset with more parks and more variables – I think even after a small number of posts I’m starting to hit the wall with what I can do analytically.

The second thing I want to try is to go back to the Random Forest model I made that seemed to be predicting things pretty well. I should interrogate that model to get the importance of the variables (a pretty trivial task in R), which would confirm or deny that ownership is more important than being in Asia.

What do you think? Are my results believable? Is this truly the result of the excellent strategic and marketing work done by Universal in recent years, or is it just luck that they’re in the right place at the right time? One thing is certain: the theme park world is changing players, and between Universal’s charge to the top and the ominous growth of the Chinese megaparks, Disney is going to have a run for its money in the next few years.

 

Using machine learning to improve predictions of visitor numbers

[Image: The torii at EPCOT with the globe thing in the background]

I wrote previously about using the Holt-Winters model for time series analysis, particularly to predict the number of visitors to two of the world's top theme parks next year. I am using annual data from the last ten or so years (which is all that's available from the Themed Entertainment Association at this point), and unfortunately we could see quite easily that this sort of frequency of data (i.e. annual) was too sparse to make a decent prediction.

So the data are horrible, what are you going to do?

This kind of annoyed me – it takes ages to put together all this data in the first place and the results were disappointing. So I started thinking about other ways I could potentially model this using other data as well, and it was pretty easy to get general information about all these parks like their location, opening date and company ownership. I can imagine that parks that are close to each other are probably serving a similar crowd, and are subject to the same factors. The same goes for park ownership – the parent companies of these parks each have their own strategies, and parks with the same owner probably share in each other's successes and failures. But to allow for these sorts of assumptions, I needed some way of adding this information to my model and letting it use this sort of stuff to inform its predictions.

Machine Learning to the rescue

In current Data Science, Machine Learning is sort of a go-to when the normal models fail. It allows us to take a vast array of complex information and use algorithms to learn patterns in the data and make some pretty amazing predictions. In this case we don't really have Big Data like we would at a major corporation, but given that the numbers are pretty stable and we're only trying to predict a few cases, it's possible that this approach could improve our predictions.

Machine what now?

I know, it's both a confusing and kind of ridiculous name. The whole idea started when Computer Scientists, Mathematicians and Statisticians started using computers to run equations millions of times over, using the results of each round, or 'iteration', of the calculation to update the next. It started with running some pretty basic models, like linear and logistic regression, over and over, testing the results and adjusting the weights of each factor in the model to improve them each time. Soon people started using these as building blocks in more complicated models, like Decision Trees, which evolved into Random Forests (the result of thousands or millions of decision trees). The sophistication of the building blocks improves daily, as does the ability to stack these blocks into more and more complex combinations of models. The winners of many Kaggle competitions now take the most sophisticated of methods and combine them for ridiculously accurate predictions of everything from rocket fuel usage to credit card risk. In this article I'm going to use one of the most popular algorithms, the Random Forest. I like these because they can be used for both numeric and categorical data, and do pretty well on both.
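
The post doesn't show the modelling code, but here's a hedged sketch of the approach in Python with scikit-learn, assuming a hypothetical park_data.csv holding one row per park per year with the sort of features described above:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical file: one row per park per year, with the park's owner,
# country, the year of measurement and the annual visitor count
df = pd.read_csv("park_data.csv")

# One-hot encode the categorical columns; keep year as a plain number
X = pd.get_dummies(df[["year", "owner", "country"]])
y = df["visitors"]

# Train on everything before 2015, then predict 2015 to check the fit
train, test = df["year"] < 2015, df["year"] == 2015
rf = RandomForestRegressor(n_estimators=1000, random_state=42)
rf.fit(X[train], y[train])

results = pd.DataFrame({
    "park": df.loc[test, "park"].values,
    "actual": y[test].values,
    "predicted": rf.predict(X[test]),
})
print(results)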

The results

This time we actually started getting pretty close to a decent model. Below you can see the graph of predicted and actual (labeled as ‘value’) visitor numbers for each park in 2015:

[Figure: Predicted vs. actual visitor numbers for each park in 2015]

It's not too far off in a lot of cases, and pretty much everywhere it's predicting just below what really happened, except in the case of Disneyland Paris. In a few cases I'm way off, like for Universal Studios Japan, which could possibly be due to the stellar performance of all the Universal parks recently. So with this information in hand, here are my predictions for 2016:

DISNEY ANIMAL KINGDOM 10262808.79
DISNEY CALIFORNIA ADVENTURE 7859777.858
DISNEY HOLLYWOOD STUDIOS 10161975.17
DISNEYLAND 15850608.32
DISNEYLAND PARIS 11303153.4
EPCOT 11048540.24
EUROPA PARK 4600339.552
EVERLAND 7108378.079
HONG KONG DISNEYLAND 6508497.992
ISLANDS OF ADVENTURE 7419398.232
MAGIC KINGDOM 17124831.22
NAGASHIMA SPA LAND 5305896.091
OCEAN PARK 6860359.451
SEAWORLD FL 5440392.711
TIVOLI GARDENS 4249590.638
TOKYO DISNEY SEA 13529866.78
TOKYO DISNEYLAND 15279509.39
UNIVERSAL STUDIOS FL 7079618.369
UNIVERSAL STUDIOS HOLLYWOOD 5956300.006
UNIVERSAL STUDIOS JAPAN 9611463.005

If you want to see how these relate to my 2015 predictions, here’s a graph:

[Figure: 2016 predictions alongside the 2015 predictions for each park]

 

Future stuff

As usual, I can still see a whole lot of things I could do to improve this model. At the moment there are only two variables 'moving' with each row – the date and the visitor number. I could add a few more features to my model to improve things – the GDP of the country the park is in, for example.

Second, Random Forests are notoriously bad at predicting time series data. In this case I converted the year of the data into a numeric vector rather than a date, adding 1 to the variable for the prediction. Given that each entry for each park was an even number of days apart (365 each row) I think that's fair, but maybe I can't treat annual entries that way. To be fair, there don't seem to be many models particularly good at predicting time series. There are suggestions of using artificial neural networks, but these aren't particularly noted for time-series or spatio-temporal modelling. I think 'Data Science' needs to draw a bit more from Statistics in this case, and I'll probably look in that direction for improved results in future. Given that it's annual data I have the advantage of having a long time to process my model, so things like MCMC using Stan might be promising here.

Finally, I need to get more practice at using ggplot2 for pretty graphs. I know a few tricks but my coding chops really aren’t up to building things with the right labels in the right places, especially when there are really long names. In this article I spent ages trying to fit the names of the parks into the first graph, but in the end I really couldn’t figure it out without making it really ugly. I’d love to be able to add my predictions as extensions on a line plot of the observed data, but that seems like epic level ggplot ninja-ing.

I’ll probably continue to attempt improving my predictions because it makes me feel like a wizard, but at this point I’ll most likely try this by playing with different models rather than ‘feature engineering’, which is most popular in Kaggle.

I’m always keen to hear people’s feedback and I’d love to improve my analyses based on people’s suggestions. Do you think my estimates are accurate, or is there something major I’ve missed?

 

Predictions of Disney and Universal visitor numbers

[Image: The Africa area of Disney's Animal Kingdom]

When thinking about theme parks, one of the most obvious questions is how to predict the number of visitors expected for the coming years. This is not easy to do, but even an approximate answer would help in planning ride maintenance and staffing levels.

Why is this so difficult?

There are a bunch of reasons it’s difficult to predict visitor numbers to any large attraction.

First, all theme parks around the world are subject to global economics – if a park attracts lots of visitors from an area that happens to have a war or a recession then all bets are off.

Second, in places like Orlando where there is a high concentration of parks the number of visitors at a specific park depends heavily on the popularity of other parks in the area.

Finally, when we are talking about a global audience, there are any number of issues that can arise to destroy a park's precious season. In 2010, when the Icelandic volcano Eyjafjallajökull erupted unexpectedly, the Danish park Tivoli Gardens saw a drop of 20,000 visitors.

How is it done?

When forecasting pretty much anything, the go-to method is called the Holt-Winters model. There is a whole lot of clever maths behind this, but what you need to know is that it looks at data collected over time (annually in our case), placing more importance on values it saw more recently than on the ones it saw a long time ago.
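
For reference, here is a minimal sketch of the idea in Python with statsmodels, on a made-up annual series. With annual data there is no seasonal component to estimate, so this effectively reduces to Holt's linear-trend version of the method:

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical annual visitor numbers (in millions) for one park, 2006-2015
visitors = pd.Series([6.0, 6.2, 6.4, 5.9, 5.9, 6.0, 6.2, 7.1, 8.3, 9.6])

# No seasonal term (the data are annual), just a level and an additive trend
model = ExponentialSmoothing(visitors, trend="add", seasonal=None).fit()

# Forecast the next ten years of visitor numbers
print(model.forecast(10))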

The data come from the Themed Entertainment Association annual reports, which are sort of canonical for the theme park industry. In this set we go back as far as their published reports allow – to 2006. This isn’t a particularly long time, especially considering that all we get is annual data, but at least we might be able to get some idea of what we could expect.

Who cares?

We have data for the top 22 or so parks over that time (the bottom few tend to drop off every couple of years), but to show what we're doing we'll just look at the two major competitors in the theme park industry – Disney's Magic Kingdom, and the first non-Disney competitor, Universal Studios Florida. This is interesting because Universal has recently announced an aggressive new strategy, likely based on the success of its recent Harry Potter attractions. But can Universal expect its rise to continue, or will Magic Kingdom maintain its unbeatable position?

The results

Well, it doesn't look particularly good for Universal's strategy. Here are plots of the Holt-Winters fits of visitor numbers for Magic Kingdom and Universal Studios:

[Figures: Holt-Winters fits of visitor numbers for Universal Studios Florida and Magic Kingdom]

We can see that both parks are steady, but Universal Studios performs massively below Magic Kingdom. The red line shows the fitted Holt-Winters model, and to be honest I'm not that happy with it. Really we're just predicting the value from the previous year, so I'm interested to see how it does with forecasting.

To see how the two parks might do against each other into the future, we use the Holt-Winters model to predict the next ten years of visitors:

[Figures: Ten-year Holt-Winters forecasts for Universal Studios Florida and Magic Kingdom]

We can see here that our (dumb) Holt-Winters model is predicting the Magic Kingdom to sustain its massive lead over Universal Studios. We can see this in the 80% confidence intervals for both parks at the ten year period – between 7 and 12.16 million visitors for Universal, and between 18.5 and 22.4 million for the Magic Kingdom. This isn’t even close to an overlap, and suggests that Universal has next to no chance of overtaking the Disney powerhouse.

The lessons

The main thing I learned from this exercise is that the Holt-Winters model is best suited to data that is more frequent than annual. The power of the model comes from estimating seasonal variations, so with monthly or even quarterly data our predictions would become a lot more interesting.

I also learned that Universal Studios may have gotten a little excited about their recent success. It's been many years since they were able to crack the Disney fortress of top ranks, and the Harry Potter world attraction seems to have had a bigger effect than they realise, even at this point.

Future stuff

There is a whole lot more I'm intending to do with this data. Most immediately I'd like to try and improve my forecasts by adding in information about the parks, such as their location. As I mentioned at the top of the article, the success of parks in places like Orlando, and arguably the Benelux region, is highly dependent on the performance of their competitors, so a model would likely be able to gain a lot of information from the performance of nearby parks.

I also want to see if there are groupings of parks according to their visitor numbers over time. Seeing different clusters of parks by this metric would suggest they are catering to different populations, and might indicate which parks were truly competing against each other.

This was fun to do, and a great experience to play around with some time series data. Hope you learned something!