
How to do fuzzy matching in Python

Statistics Netherlands (CBS) has an interesting dataset containing data at the city, district and neighbourhood levels. However, some names of neighbourhoods have changed, specifically between 2010 and 2011 for Amsterdam. For example, Bijlmer-Centrum D, F en H was renamed Bijlmer-Centrum (D, F, H).

In some of those cases the neighbourhood codes have changed as well, and CBS doesn’t have conversion tables. So this is one of those cases where you need fuzzy string matching.

There’s a good Python library for that job: Fuzzywuzzy. It was developed by SeatGeek, a company that scrapes event data from a variety of websites and needed a way to figure out which titles refer to the same event, even if the names have typos and other inconsistencies.

Fuzzywuzzy compares two strings and computes a score between 0 and 100 reflecting how similar they are. It can use different methods to calculate that score (e.g. fuzz.ratio(string_1, string_2) or fuzz.partial_ratio(string_1, string_2)). Some of those methods are described in this article, which is worth a read.
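Fuzzywuzzy's simple ratio is essentially the standard library's difflib.SequenceMatcher similarity scaled to 0–100 (it uses the faster python-Levenshtein package instead when that is installed). A rough stdlib-only sketch of what the score means:

```python
from difflib import SequenceMatcher

def simple_ratio(a, b):
    # Similarity of two strings on a 0-100 scale, roughly like fuzz.ratio
    return round(SequenceMatcher(None, a, b).ratio() * 100)

# Both strings share 'Bijlmer-Centrum ', so the score is high
print(simple_ratio('Bijlmer-Centrum D, F en H', 'Bijlmer-Centrum (D, F, H)'))
```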

Alternatively, you can take a string and have Fuzzywuzzy pick the best match(es) from a list of options (e.g. process.extract(string, list_of_strings, limit=3) or process.extractOne(string, list_of_strings)). Here, too, you can specify the method used to calculate the score, but you may want to first try the default (WRatio), which figures out for itself which method to use and seems to work pretty well.
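If you can't install Fuzzywuzzy, the standard library's difflib.get_close_matches does a cruder version of the same job: it returns the n closest options above a cutoff, though without the scores. A sketch using the renamed neighbourhood from above (the other names in the list are just illustrative):

```python
from difflib import get_close_matches

options = ['Bijlmer-Centrum (D, F, H)', 'Bijlmer-Oost (E, G, K)', 'Nellestein']

# Closest match to the old 2010 name, comparable to process.extractOne
match = get_close_matches('Bijlmer-Centrum D, F en H', options, n=1)
print(match)  # ['Bijlmer-Centrum (D, F, H)']
```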

Here’s the code I used to match the 2010 CBS Amsterdam neighbourhood names to those for 2011:

import pandas as pd
from fuzzywuzzy import process
# Prepare data
colnames = ['name', 'level', 'code']
data_2010 = pd.read_excel('../data/Kerncijfers_wijken_e_131017211256.xlsx', skiprows=4)
data_2010.columns = colnames
data_2010 = data_2010[data_2010.level == 'Buurt']
names_2010 = data_2010['name']
data_2011 = pd.read_excel('../data/Kerncijfers_wijken_e_131017211359.xlsx', skiprows=4)
data_2011.columns = colnames
data_2011 = data_2011[data_2011.level == 'Buurt']
names_2011 = data_2011['name']
# Actual matching
recode = {}
for name in names_2010:
    best_match = process.extractOne(name, names_2011)
    if best_match[1] < 100:
        print(name, best_match)
    recode[name] = best_match[0]

The code prints all matches with a score below 100 so you can inspect them for incorrect matches (with larger datasets this may not be feasible). With the default process method I didn't get any incorrect matches, but with fuzz.partial_ratio, IJplein en Vogelbuurt was matched to Vondelbuurt instead of Ijplein/Vogelbuurt.
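Once recode is built, applying it is just a dictionary lookup. A minimal sketch with plain lists and an illustrative entry (with pandas you could instead pass the dict to Series.map):

```python
# Illustrative subset of the recode dict built above
recode = {'Bijlmer-Centrum D, F en H': 'Bijlmer-Centrum (D, F, H)'}

old_names = ['Bijlmer-Centrum D, F en H', 'Nellestein']

# Fall back to the original name when there was no rename
new_names = [recode.get(name, name) for name in old_names]
print(new_names)  # ['Bijlmer-Centrum (D, F, H)', 'Nellestein']
```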

PS In case you’re actually going to work with the local CBS data, you should know that Amsterdam’s neighbourhoods (buurten) were reclassified as districts (wijken) in 2016, when a more detailed set of neighbourhoods was introduced. You can translate 2015 neighbourhood codes to 2016 district codes:

def convert_code(x):
    # Swap the 'BU' (buurt) prefix for 'WK' (wijk)
    x = 'WK' + x[2:]
    # Keep 'WK' plus the four-digit municipality code, then the last two digits
    x = x[:6] + x[-2:]
    return x
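To make the conversion concrete, here it is applied to a made-up buurt code (0363 is Amsterdam's municipality code, but the wijk and buurt digits are illustrative); the function is restated so the snippet runs on its own:

```python
def convert_code(x):
    x = 'WK' + x[2:]       # swap the 'BU' prefix for 'WK'
    return x[:6] + x[-2:]  # keep 'WK' + municipality code + last two digits

# 'BU' + municipality 0363 + wijk 00 + buurt 01
print(convert_code('BU03630001'))  # WK036301
```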
