Python

Scraping Airbnb

Airbnb is not exactly keen to share data that might help analyse its impact on local housing markets. In 2016, the Amsterdam Municipality decided to collect Airbnb data using a scraper - a computer programme that automates the job of retrieving information from web pages.

Amsterdam is not the only government to use web scraping. Increasingly, this technique is used to obtain data about topics ranging from consumer prices to job vacancy statistics and business data. Collecting data from the internet has advantages, but it also poses some challenges. It may be difficult to aggregate data coming from different websites, and data found online may not cover all aspects of a phenomenon you’re trying to understand (for example, not all job vacancies are published online). On a more practical level, your web scraper code may break when websites change.

In March 2017, Amsterdam reported that its weekly scrapes of major platforms like Airbnb required little maintenance. But last week, it sent a report to the city council describing how Airbnb has been making changes to its website - perhaps in an attempt to frustrate efforts to collect information about its business practices. Initially, Amsterdam’s digital surveillance department successfully updated its scraper, but following new changes to the Airbnb website since May 2018, Amsterdam now appears to have given up on scraping Airbnb.

This made me curious about the technical characteristics of the Airbnb website. Here are some observations, based on an (admittedly superficial) examination:

  • The initial download of a web page isn’t the final version: after downloading, the contents of the page are dynamically altered using Javascript. For some purposes like navigating search results, you may prefer the final version of the page, which you can get using Selenium. Selenium would especially come in handy for interacting with the calendar to get availability and price information, which seems to be rather tricky.
  • Some details on listings appear to be available only in the Javascript code. You can find them using patterns like '"lat":(.*?),"lng":(.*?),' (see the sketch after this list).
  • Airbnb uses NGINX to control access to its website. If you request too many pages too fast, you’ll hit a rate limit and get an error page. I guess it should be possible to avoid the rate limit by adding pauses to your programme, but it may take quite some time to figure out how often and how long they should be.
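
As an illustration, here’s a minimal sketch of what extracting coordinates from listing pages might look like, with pauses between requests. It rests on assumptions: the example url is made up, the pattern may need adjusting to the current page source, and the pause length is a guess.

import re
import time

import requests

PATTERN = re.compile(r'"lat":(.*?),"lng":(.*?),')

# Hypothetical listing url - replace with real ones.
urls = ['https://www.airbnb.com/rooms/12345']

for url in urls:
    html = requests.get(url).text
    match = PATTERN.search(html)
    if match:
        lat, lng = match.groups()
        print(url, lat, lng)
    # Pause between requests to (hopefully) stay under the rate limit.
    time.sleep(10)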

While it appears that the barriers to scraping the Airbnb website may be surmountable, it’s quite possible that I underestimate what this would take. If you were actually to build a scraper and use it to frequently collect information about all local listings, all kinds of new problems might arise.

Meanwhile, other sources of Airbnb data are available. In a previous post, I used data made available by Tom Slee and by Murray Cox’s Inside Airbnb. Slee has since stopped updating his data, but Inside Airbnb is still active. As the Amsterdam Municipality notes in its report, Inside Airbnb has successfully adapted its scraping technique each time Airbnb changed its website.

UPDATE 13 May - See comments on Twitter: Jens von Bergmann from Vancouver also has a scraper that is working. Following some requests, Tom Slee recently updated his scraper; his code is available on Github.

My first Python package

As a self-taught programmer, I sometimes feel a bit uneasy about the code I write. Sure, it may work, but there’s probably a more efficient and more elegant way to do it. These doubts notwithstanding, I’ve just published my first Python package: limepy.

Its purpose is simple: it helps you process and summarise LimeSurvey data. LimeSurvey is a survey tool, somewhat similar to SurveyMonkey. It’s different in that it’s open source, and probably more versatile.

If you download survey data as a csv, the answers to question types such as multiple choice questions or blocks of questions (‘arrays’) will be spread out over multiple columns. One task of limepy is to make sure all the data for a specific item end up in one table.
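
To give an idea of what that involves - this is not limepy’s API, just a generic pandas sketch with made-up column names - the columns belonging to one question can be gathered like this:

import pandas as pd

# Made-up example: one column per answer option of a multiple choice question.
df = pd.DataFrame({
    'id': [1, 2, 3],
    'q1[opt1]': ['Yes', '', 'Yes'],
    'q1[opt2]': ['', 'Yes', 'Yes'],
})

# Gather all columns belonging to question q1 into one long table.
q1_cols = [col for col in df.columns if col.startswith('q1[')]
long_format = df.melt(id_vars='id', value_vars=q1_cols,
                      var_name='option', value_name='answer')
print(long_format)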

Limepy will also help you with a number of other tasks, like downloading survey data, creating a codebook, printing answers to open-ended questions and printing the answers of an individual respondent.

Find the package on Github and PyPI. Install with pip install limepy. Feedback welcome.

Converting Election Markup Language (EML) to csv

Note that the map above isn’t really a good illustration here because I used a different data source to create it.

Getting results of Dutch elections at the municipality level can be complicated, but what if you want to dig a little deeper and look at results per polling station? Or even per candidate, per polling station? For elections since 2009, that information is available from the data portal of the Dutch government.

Challenges

The data is in Election Markup Language, an international standard for election data. I didn’t know that format, and processing the data posed a bit of a challenge. I couldn’t find a simple explanation of the data structure, and the Electoral Board states that it doesn’t provide support on the format.

For example, how do you connect a candidate ID to their name and other details? I think you need to identify the Kieskring (electoral district) from the contest name in the results file. Then, find the candidate list for that Kieskring and look up the candidate’s details using their candidate ID and affiliation. With municipal elections, however, you have to look up candidates in the city’s candidate list (which doesn’t seem to have a contest name).
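
To make this a bit more concrete, here’s a rough sketch using xml.etree.ElementTree. The file name and element names are simplified assumptions - real EML files use namespaces and the exact paths may differ.

import xml.etree.ElementTree as ET

# Hypothetical file name; simplified element names without namespaces.
tree = ET.parse('Kandidatenlijsten_Kieskring1.xml')

def candidate_name(affiliation_id, candidate_id):
    """Look up a candidate name by affiliation ID and candidate ID."""
    for affiliation in tree.iter('Affiliation'):
        if affiliation.find('AffiliationIdentifier').get('Id') == affiliation_id:
            for candidate in affiliation.iter('Candidate'):
                if candidate.find('CandidateIdentifier').get('Id') == candidate_id:
                    return candidate.findtext('CandidateFullName')
    return None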

Practical tips

If you plan to use the data, here are some practical tips:

  • Keep in mind that locations and names of polling stations may change between elections.
  • If you want to geocode the polling stations, the easiest way is to use the postcode, which (for recent elections) is often appended to the polling station name. If the postcode is not available, or if you need a more precise location, the lists of polling station names and locations provided by Open State (2017, 2018) may be of use. Use fuzzy matching on the polling station name, or match on postcode where available (see the sketch below). Of course, such an approach is not entirely error-free.
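
For the fuzzy matching, the standard library can get you some of the way. A minimal sketch, with made-up station names:

from difflib import get_close_matches

# Made-up polling station names from two different sources.
stations_2017 = ['Stembureau Stadhuis', 'Stembureau De Pijp']
stations_2018 = ['Stadhuis (stembureau)', 'De Pijp']

for name in stations_2018:
    # Return the single best match above the (lenient) similarity cutoff.
    matches = get_close_matches(name, stations_2017, n=1, cutoff=0.4)
    print(name, '->', matches[0] if matches else 'no match')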

Further, note that the data for the 2017 Lower House election is only available in EML format for some of the municipalities. I guess this has something to do with the fact that, prior to the election, vulnerabilities had been discovered in the vote-counting software, so the votes had to be counted manually.

Python script

Here’s a Python script that converts EML files to csv. See caveats there.

UPDATE 23 February 2019 - improved version of the script here.

The orientation of Amsterdam’s streets

Eight days from now, Amsterdam will have a new metro line traversing the city from north to south. But what about the orientation of the city’s streets?

Geoff Boeing - who created a Python package for analysing street networks using data from OpenStreetMap - just published a series of polar histograms of American and ‘world’ cities. Amsterdam isn’t among them, but Boeing made his code available, so I used that to create charts for the largest cities in the Netherlands.
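
For those who want to try this themselves, here’s a minimal sketch of the general approach - not Boeing’s actual code, and the OSMnx function names may differ between versions:

import matplotlib.pyplot as plt
import numpy as np
import osmnx as ox

# Download the street network and annotate each edge with its compass bearing.
G = ox.graph_from_place('Amsterdam, Netherlands', network_type='drive')
G = ox.add_edge_bearings(G)
bearings = [data['bearing'] for _, _, data in G.edges(data=True)]

# Draw a polar histogram with 36 ten-degree wedges, north at the top.
counts, bin_edges = np.histogram(bearings, bins=36, range=(0, 360))
ax = plt.subplot(111, projection='polar')
ax.set_theta_zero_location('N')
ax.set_theta_direction(-1)
ax.bar(np.radians(bin_edges[:-1]), counts, width=np.radians(10), align='edge')
plt.show()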

While the pattern isn’t nearly as monotonous as in most American cities, I’m still surprised how many streets in Amsterdam run from north to south or from east to west. The Hague has a strong diagonal orientation; Rotterdam doesn’t seem to have a dominant orientation and Utrecht is a bit in between.

With Boeing’s code, you can also do the analysis specifically for roads that are accessible to cyclists, but for Amsterdam that doesn’t make much difference since most roads are.

Discussion

15 July 2018 - There was some really interesting discussion on Twitter in response to my post from last Friday (I use Twitter names to refer to people; most sources are in Dutch).

Curved streets

Both Sanne and Egon Willighagen asked how the chart treats curved streets. I have to admit I hadn’t checked, but the docstring of the add_edge_bearings function explains that it calculates the compass bearing of edges from origin node to destination node, so that implies that streets are treated as if they were straight lines.

Is that a problem? Probably not for many US cities, for they seem to have few curved streets. As for Amsterdam: most people’s mental image of the city is probably dominated by the curved canals of the city centre. However, many neighbourhoods consist of grids of more or less straight streets. So perhaps curved streets have little impact on the analysis after all.

Length versus surface

Hans Wisbrun argues that the chart type is nice, but also deceptive. The number of streets is represented by the length of the wedges, but one may intuitively look at the surface, which increases with the square of the length. In a post from 2013 (based on a tip from Ionica Smeets), he used a chart by Florence Nightingale to discuss the problem.

Rogier Brussee agrees, but argues that a polar chart is still the right choice here, because what you want to show is the angle of streets.

In a more general sense, I think the charts are an exploratory tool that’ll give you an idea how street patterns differ between cities. If you really want to understand what the wedges represent, you’ll have to look at a map.

Beach ridges

That’s what Stephan Okhuijsen did. He noted that the chart for The Hague appears to reflect the orientation of the city’s coastline. Not quite, Christiaan Jacobs replied. The orientation of the city’s streets is not determined by the current coastline, but by the original beach ridges.

I don’t know much about geography (or about The Hague for that matter), but a bit of googling suggests Jacobs is right. See for example this map (from this detailed analysis of one of The Hague’s streets), with the old sand dunes shown in dark yellow.

See also links to previous similar work in this post by Nathan Yau (FlowingData).

How to use Python and Selenium for scraping election results

A while ago, I needed the results of last year’s Lower House election in the Netherlands, by municipality. Dutch election data is available from the website of the Kiesraad (Electoral Board). However, it doesn’t contain a table of results per municipality. You’ll have to collect this information from almost 400 different web pages. This calls for a webscraper.

The Kiesraad website is partly generated using Javascript (I think) and therefore not easy to scrape. For this reason, this seemed like a perfect project to explore Selenium.

What’s Selenium? «Selenium automates browsers. That’s it!» Selenium is primarily a tool for testing web applications. However, as a tutorial by Thiago Marzagão explains, it can also be used for webscraping:

[S]ome websites don’t like to be webscraped. In these cases you may need to disguise your webscraping bot as a human being. Selenium is just the tool for that. Selenium is a webdriver: it takes control of your browser, which then does all the work.

Selenium can be used with Python. Instructions to install Selenium are here. You also have to download chromedriver or another driver; you may store it in /usr/local/bin/.

Once you have everything in place, this is how you launch the driver and load a page:

from selenium import webdriver
 
URL = 'https://www.verkiezingsuitslagen.nl/verkiezingen/detail/TK20170315'
 
browser = webdriver.Chrome()
browser.get(URL)

This will open a new browser window. You can use either xpath or css selectors to find elements and then interact with them. For example, find a dropdown menu, identify the options from the menu and select the second one:

XPATH_PROVINCES = '//*[@id="search"]/div/div[1]/div'
element = browser.find_element_by_xpath(XPATH_PROVINCES)
options = element.find_elements_by_tag_name('option')
options[1].click()

If you were to check the page source of the web page, you wouldn’t find the options of the dropdown menu; they’re added afterwards. With Selenium, you needn’t worry about that - it will load the options for you.

Well, actually, there’s a bit more to it: you can’t find and select the options until they’ve actually loaded. The options likely won’t be in place initially, so you’ll need to wait a bit and retry.

Selenium comes with functions that specify what it should wait for, and how long it should wait and retry before it throws an error. But this isn’t always straightforward, as Marzagão explains:

Deciding what elements to (explicitly) wait for, with what conditions, and for how long is a trial-and-error process. […] This is often a frustrating process and you’ll need patience. You think that you’ve covered all the possibilities and your code runs for an entire week and you are all happy and celebratory and then on day #8 the damn thing crashes. The servers went down for a millisecond or your Netflix streaming clogged your internet connection or whatnot. It happens.
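
For reference, this is roughly what Selenium’s built-in explicit waits look like (a minimal sketch; the ten-second timeout is arbitrary):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Wait up to 10 seconds for the dropdown to appear, then raise an error.
wait = WebDriverWait(browser, 10)
element = wait.until(EC.presence_of_element_located((By.XPATH, XPATH_PROVINCES)))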

I ran into pretty similar problems when I tried to scrape the Kiesraad website. I tried many variations of the built-in wait parameters, but without any success. In the end I decided to write a few custom functions for the purpose.

The example below looks up the options of a dropdown menu. As long as the number of options isn’t greater than 1 (the page initially loads with only one option, a dash, and other options are loaded subsequently), it will wait a few seconds and try again - until more options are found or until a maximum number of tries has been reached.

import time

MAX_TRIES = 15

def count_options(xpath, browser):
    """Count the options of a dropdown menu, retrying until more than one has loaded."""
    time.sleep(3)
    count = 0
    tries = 0
    while tries < MAX_TRIES:

        try:
            element = browser.find_element_by_xpath(xpath)
            count = len(element.find_elements_by_tag_name('option'))
            if count > 1:
                return count
        except Exception:
            # The element may not exist yet while the page is still loading.
            pass

        time.sleep(1)
        tries += 1
    return count
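
Using it is straightforward; for example, with the XPATH_PROVINCES selector from the earlier snippet:

# Wait until the province dropdown has been populated, then select an option.
if count_options(XPATH_PROVINCES, browser) > 1:
    element = browser.find_element_by_xpath(XPATH_PROVINCES)
    element.find_elements_by_tag_name('option')[1].click()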

Here’s a script that will download and save the result pages of all cities for the March 2017 Lower House election, parse the html, and store the results as a csv file. Run it from a subfolder in your project folder.

UPDATE 23 February 2019 - improved version of the script here.

Notes

Dutch election results are provided by the Kiesraad as open data. The Kiesraad website used to provide a csv with the results for all municipalities, but this option is no longer available. Alternatively, datasets can be downloaded for each municipality, but at least for 2017, these use different formats.

Scraping the Kiesraad website appears to be the only way to get uniform data per municipality.

Since I originally wrote the scraper, the Kiesraad website has been changed. As a result, it would now be possible to scrape the site in a much easier way, and there would be no need to use Selenium. The source code of the landing page for an election contains a dictionary with id numbers for all the municipalities. With those id numbers, you can create urls for their result pages. No clicking required.
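
A sketch of what that could look like - the variable name in the page source and the url pattern are pure assumptions; inspect the actual landing page to find the real ones:

import json
import re

import requests

URL = 'https://www.verkiezingsuitslagen.nl/verkiezingen/detail/TK20170315'
html = requests.get(URL).text

# Hypothetical: the landing page embeds a dictionary of municipality ids.
match = re.search(r'var municipalities = (\{.*?\});', html)
if match:
    municipalities = json.loads(match.group(1))
    result_urls = [URL + '/gemeente/' + str(id_) for id_ in municipalities]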
