champagne anarchist | armchair activist


Not ditching R for Python just yet

As a result of the whole controversy over using Python vs R for statistical analysis and graphs, I thought I’d switch to Python. Mostly because I think it’s more practical to use the same language for different tasks, but also because it seems easier to make decent-looking graphs with Python (I’m sure some people will thoroughly disagree). And, of course, because googling for solutions using «Python» as a search term simply works better than searching for «R».

But now Brian Caffo, Roger Peng and Jeff Leek’s Data Science Specialization Course has started on Coursera and they use R. I guess I’ll have to postpone my decision.

About those weird Netflix genres

The hippest story on Twitter right now is how Alexis Madrigal of the Atlantic discovered the 76,897 genres Netflix uses to classify its movie offering. Some examples of these weirdly specific genres include Critically-acclaimed Cerebral Independent Films; Feel-good Movies starring Elvis Presley and Coming-of-age Animal Tales.

Madrigal explains how straightforward it is to navigate all the genre pages on the Netflix website by incrementing the id in the url. But then he mentions that he retrieved the genres using «an expensive piece of software called UBot Studio that lets you easily write scripts for automating things on the web». Surely a few lines of Python code could’ve done the job? In fact, I guess you could probably extract the subgenre structure and the genre elements - region, adjectives, time period etc - with nltk and regex.
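For what it’s worth, here’s a rough idea of how you could pull genre elements out of the genre names with plain regex. The patterns below are just guesses at a couple of the building blocks – the actual grammar Madrigal describes is far richer:

```python
import re

# Hypothetical patterns for a few genre building blocks; the real
# Netflix grammar has many more (regions, moods, star names, etc.).
ADJECTIVES = r"(Critically-acclaimed|Feel-good|Dark|Cerebral|Gritty)"
PERIOD = r"from the (\d{4}s)"

def parse_genre(genre):
    """Pull a few recognisable elements out of a Netflix-style genre string."""
    adjectives = re.findall(ADJECTIVES, genre)
    period = re.search(PERIOD, genre)
    return {
        "adjectives": adjectives,
        "period": period.group(1) if period else None,
    }

print(parse_genre("Dark Political Movies from the 1980s"))
# {'adjectives': ['Dark'], 'period': '1980s'}
```

Add nltk part-of-speech tagging on top of this and you could probably recover most of the subgenre structure without any expensive software.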

Never mind that, though. Madrigal’s article is an interesting read. Here it is if you haven’t read it yet. And here’s a critique of Netflix’s algorithms by Felix Salmon of Reuters, who argues that its recommendations are no longer about quality but about offering more of the same. You watched one Dark Political Movie from the 1980s? Then we’ll show you some more Dark Political Movies from the 1980s.

Just 13% of my Linkedin connections use buzzwords

Linkedin recently released its latest analysis of overused buzzwords in members’ profiles. Of course, this is just a ploy to get you to volunteer more personal details («Update your profile today!»), but never mind that.

Just in case, I checked whether any of my connections engage in overusing buzzwords. Reassuringly, the majority can’t even be bothered to fill out «summaries» and «specialties» in the first place. Those who do seldom use the top-ten buzzwords for the Netherlands, as the table below shows.

Dutch term        %     English term   %
Verantwoordelijk  0.0   Responsible    0.5
Strategisch       1.8   Strategic      3.2
Expert            3.7   Expert         3.7
Creatief          0.9   Creative       1.8
Innovatief        0.0   Innovative     1.4
Dynamisch         0.0   Dynamic        0.5
Gedreven          0.5   Motivated      0.9
Duurzaam          0.0   Sustainable    1.4
Effectief         0.0   Effective      0.5
Analytisch        0.9   Analytic       0.5

Only two Dutch buzzwords are used by more than one percent of my connections. Interestingly, their English equivalents are slightly more prevalent. 87% of my connections are completely buzzword-free. For what it’s worth, people who use buzzwords also tend to have more connections.

Full disclosure: I may have used the word «strategic» in my own profile.

Update 21 Dec - Don’t push it, Linkedin.

Incidentally, 436,567 people is less than 0.17% of the 259m Linkedin users. Not that impressive.


I used these scripts (in part adapted from Matthew Russell’s Mining the Social Web) to get the «summaries» and «specialties» of connections from the Linkedin API and process them.
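The counting step is simple enough to sketch. Something along these lines – a toy version, not the actual scripts linked above, which do the API plumbing as well:

```python
import re

BUZZWORDS = ["verantwoordelijk", "strategisch", "expert", "creatief",
             "innovatief", "dynamisch", "gedreven", "duurzaam",
             "effectief", "analytisch"]

def buzzword_share(profiles, buzzwords=BUZZWORDS):
    """Percentage of profile texts that contain each buzzword (whole-word,
    case-insensitive)."""
    counts = {word: 0 for word in buzzwords}
    for text in profiles:
        lowered = text.lower()
        for word in buzzwords:
            if re.search(r"\b" + word + r"\b", lowered):
                counts[word] += 1
    n = len(profiles)
    return {word: round(100.0 * c / n, 1) for word, c in counts.items()}

profiles = ["Strategisch denker en expert in data", "Gewoon een profiel"]
print(buzzword_share(profiles))
```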

Script to look up the gender of Dutch first names

This script determines the gender of Dutch persons by looking up their first name in a database of the Meertens Institute. The database indicates how often the name occurred as a first name for men and women in 2010. If the name is used for women substantially more often than for men, the name will be interpreted as female – and vice versa.
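The decision rule can be sketched like this. The factor-of-two threshold is just an illustration – the actual script looks the counts up in the Meertens database:

```python
def guess_gender(male_count, female_count, ratio=2.0):
    """Classify a first name as male or female if one gender outnumbers
    the other by at least `ratio`; otherwise give up."""
    if female_count >= ratio * max(male_count, 1):
        return "female"
    if male_count >= ratio * max(female_count, 1):
        return "male"
    return "unknown"

print(guess_gender(male_count=12000, female_count=30))   # male
print(guess_gender(male_count=40, female_count=55))      # unknown
```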

The reason I wrote the script has to do with this article on how the performance of women professional road cyclists is improving. I wanted to check whether a similar trend is going on among amateur riders – more specifically, participants in the Gerrie Knetemann Classic (incidentally, the script would take Knetemann for a woman – it’s not foolproof). The results of the ride are available online, but pre-2012 editions lack information on the gender of participants. So that’s what the script was for.

Speed of participants in Knetemann Classic

The results of the analysis aren’t exactly clearcut. The number of women participants in the 150km ride varied from 36 to 46, or 5 to 8% of the participants whose gender could be determined (the percentage for 2013 was 6%). The (median) speed of women participants rose in 2013, and more so than for men, but this is too thin a basis to speak of a trend.

Cycling: Garmin altimeter compared to elevation databases

During a very rainy ride in Scotland, my Garmin altimeter appeared to be off: on some of the steepest climbs it failed to register any gradient. Afterwards, I tried the «elevation correction» feature on the Garmin website, which generously added over 750m to the total ascent the device had measured. This was certainly more satisfying, but it left me wondering. Can the weather affect the Garmin altimeter? And how accurate is the recalculated ascent?

Garmin’s recalculation service works basically by looking up the gps locations of your ride in an elevation database. Strava offers a similar service. Below, I analyse the Garmin and Strava recalculations for a number of rides. Note that this is only an exploratory analysis and that no firm conclusions can be drawn on the basis of this rather small set of observations. That said, here are some preliminary conclusions:

  • If you want to boost your ego, let Garmin recalculate your ascent: chances are it will add (quite) a few metres. Strava’s recalculations tend to stay closer to the original measurement. When it does make changes, it frequently lowers the number of metres you’re supposed to have climbed, especially on relatively flat rides.
  • In theory, you’d expect weather changes to affect the ascent measured by the device, because the altimeter is basically a barometer. In practice, weather changes don’t seem to have much effect on the altimeter.
  • It appears plausible that heavy rain does in fact mess with the altimeter.
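The lookup-based recalculation boils down to replacing the barometric elevations with database elevations for each GPS point and summing the positive differences. A minimal sketch – the noise threshold is my assumption, and Garmin and Strava will certainly differ in how they smooth:

```python
def total_ascent(elevations, threshold=1.0):
    """Sum positive elevation gains along a profile, ignoring small
    fluctuations below `threshold` metres (a crude noise filter)."""
    ascent = 0.0
    for prev, cur in zip(elevations, elevations[1:]):
        gain = cur - prev
        if gain >= threshold:
            ascent += gain
    return ascent

profile = [10, 12, 11, 18, 18.5, 25, 24]  # metres, from a lookup service
print(total_ascent(profile))  # 15.5 (the 0.5m blip is filtered out)
```

The choice of threshold matters a lot on flat rides, which may be part of why the recalculations diverge most there.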

In the graphs below, the colour of the dots represents the region of the ride. Red dots represent the Ronde Hoep, a flat ride to the south of Amsterdam. Blue ones represent the Kopje van Bloemendaal (north, south), the closest thing to a climb near Amsterdam (it’s not high but quite steep). Green dots represent the central area of the country and include the Utrechtse Heuvelrug, Veluwezoom, Rijk van Nijmegen and Kreis Kleve (the latter in Germany).


By default, the graph above shows how much the Garmin recalculation differs from the ascent measured by the device (graphs may not show in older versions of Internet Explorer). The closer a dot is to the dashed line, the closer the recalculated ascent is to the original measurement.

For rides shown on the left part of the graph, where the device measured less than 500m ascent, Garmin’s recalculation often adds about 50 to 100% or more. With higher ascents, the recalculated ascent is closer to the original measurement, although it still tends to add about 30 to 50%. The highest dot to the far right of the graph is the rainy ride in Scotland; here Garmin’s recalculation added over 35%.

With the selector above the graph, you can select the Strava recalculation. You’ll notice the scale on the y axis changes (and the dashed line moves up). Also, a few red dots enter the graph. These are rides along the flat Ronde Hoep. For these rides, Garmin’s recalculation added up to 750% to the ascent measured by the device, which is why these dots were initially outside the graph area.

The Strava recalculations are similar to the Garmin ones in that the correction is larger for relatively flat rides. Unlike Garmin, Strava lowers the ascent in these cases, often by 15 to 50%. For rides where the device measured a total ascent of over 500m, the Strava recalculation tends to be pretty close to the original measurement.

Weather changes

It has been suggested that changes in the weather may affect elevation measurements. This makes sense, since the Garmin altimeter is in fact a barometer. Wikipedia says that pressure decreases by about 1.2 kPa for every 100 metres of ascent. In other words, if atmospheric pressure were to rise by a net 6 mBar during a ride, this would cause the device to underestimate total ascent by about 50 metres, so the theoretical effect wouldn’t seem to be huge.
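The back-of-the-envelope calculation, using the Wikipedia figure of roughly 1.2 kPa (12 mBar) per 100 metres near sea level:

```python
# Rough pressure-to-altitude conversion near sea level.
MBAR_PER_100M = 12.0  # ~1.2 kPa per 100 m, per the Wikipedia figure

def altitude_error_m(pressure_change_mbar):
    """Altitude error implied by a net change in atmospheric pressure
    over the course of a ride."""
    return pressure_change_mbar / MBAR_PER_100M * 100.0

print(altitude_error_m(6.0))  # 50.0 metres, as in the text
```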

The graph above shows how much recalculations differed from the original measurement, with change in pressure on the x axis. Note that the effect of recalculations is here in metres, not percent. I tried different combinations of pressure measures and recalculations and in only one case - the Garmin recalculation shown above - was the correlation statistically significant (and the regression line much steeper than the Wikipedia figure would suggest), so this is not exactly firm evidence for an effect of weather change on elevation measurement.

Heavy rain

It has been suggested that heavy rain may block the sensor hole and thus affect elevation measurement. This may sound a bit weird, but I have seen the device stop registering any ascent during very heavy rain. Among the rides considered here, two saw really heavy rainfall (the Scottish ride and a ride in Utrechtse Heuvelrug on 27 July). These show some of the largest corrections, especially in the Strava recalculation. So it seems plausible that rain does in fact affect elevation measurement.

In the spirit of true pseudoscientific enquiry, I tried to replicate the effect of heavy rain by squirting water from my bidon onto the device during a ride in Utrechtse Heuvelrug. This didn’t yield straightforward results. At first, the device registered implausibly steep gradients: it turned out it had interpreted the hump between Maarn and Doorn as 115m high, more than twice its real height. About halfway, unpredicted rain started to fall, mocking my experiment. The Strava recalculation didn’t change the total ascent much, but it did correct the height of the bit between Maarn and Doorn, so it must have added some 50+ metres elsewhere. Be that as it may, the «experiment» does seem to confirm that water can do things to the altimeter.


I took total ascent data measured by my Garmin Edge 800 and obtained a recalculation from the Garmin Connect and Strava websites. Subsequently, I looked up weather data from Weather Underground (as an armchair activist I do appreciate their slightly subversive name). Weather Underground offers historical weather data by location, with numerous observations per day. I wrote a Python script that looks up the data for the day and location of the ride and then selects the observations that roughly overlap with the duration of the ride. There turned out to be two limitations to the data. First, it appears that only data at the national level are available (the Scottish ride yielded data for London and all Dutch ones data for Amsterdam). Second, for the day / location combinations I tried there was no time-specific data for precipitation available, only for the entire day.
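Selecting the observations that overlap with the ride is straightforward. A simplified sketch – the field names and the one-hour slack are made up for illustration, and the real script has to fetch the data from the Weather Underground API first:

```python
from datetime import datetime, timedelta

def observations_during_ride(observations, ride_start, ride_end, slack_hours=1):
    """Select weather observations that roughly overlap with the ride,
    allowing some slack on either side."""
    slack = timedelta(hours=slack_hours)
    return [obs for obs in observations
            if ride_start - slack <= obs["time"] <= ride_end + slack]

# Synthetic hourly observations for one day
observations = [
    {"time": datetime(2013, 7, 27, h), "pressure_mbar": 1010 + h % 3}
    for h in range(24)
]
ride = observations_during_ride(observations,
                                datetime(2013, 7, 27, 9),
                                datetime(2013, 7, 27, 13))
print(len(ride))  # 7 hourly observations, from 08:00 to 14:00
```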

Because of these limitations, I also took an alternative approach, looking up data from the Royal Netherlands Meteorological Institute KNMI. This did yield more fine-grained data, although obviously limited to the Netherlands. In the end it turned out that it didn’t make much difference for the analysis whether KNMI or Weather Underground data is used. Code from the scripts I used for looking up weather data is here.

I tested quite a few correlations so a couple of ‘false positives’ may be expected. I didn’t statistically correct for this. Instead, I took a rather pragmatic approach: I’m cautious when there’s simply a significant correlation between two phenomena but I’m more confident when there’s a pattern to the correlations (e.g., Garmin and Strava recalculations are correlated in a similar way to another variable).