
Cycling: Garmin altimeter compared to elevation databases

During a very rainy ride in Scotland, my Garmin altimeter appeared to be off: on some of the steepest climbs it failed to register any gradient. Afterwards, I tried the «elevation correction» feature on the Garmin website, which generously added over 750m to the total ascent the device had measured. This was certainly more satisfying, but it left me wondering. Can the weather affect the Garmin altimeter? And how accurate is the recalculated ascent?

Garmin’s recalculation service basically works by looking up the GPS locations of your ride in an elevation database. Strava offers a similar service. Below, I analyse the Garmin and Strava recalculations for a number of rides. Note that this is only an exploratory analysis and that no firm conclusions can be drawn from this rather small set of observations. That said, here are some preliminary conclusions:

  • If you want to boost your ego, let Garmin recalculate your ascent: chances are it will add (quite) a few metres. Strava’s recalculations tend to stay closer to the original measurement. When it does make changes, it frequently lowers the number of metres you’re supposed to have climbed, especially on relatively flat rides.
  • In theory, you’d expect weather changes to affect the ascent measured by the device, because the altimeter is basically a barometer. In practice, weather changes don’t seem to have much effect on the altimeter.
  • It appears plausible that heavy rain does in fact mess with the altimeter.

In the graphs below, the colour of the dots represents the region of the ride. Red dots represent the Ronde Hoep, a flat ride to the south of Amsterdam. Blue ones represent the Kopje van Bloemendaal (north, south), the closest thing to a climb near Amsterdam (it’s not high but quite steep). Green dots represent the central area of the country and include the Utrechtse Heuvelrug, Veluwezoom, Rijk van Nijmegen and Kreis Kleve (the latter in Germany).

General

By default, the graph above shows how much the Garmin recalculation differs from the ascent measured by the device (graphs may not show in older versions of Internet Explorer). The closer a dot is to the dashed line, the closer the recalculated ascent is to the original measurement.

For rides shown on the left part of the graph, where the device measured less than 500m ascent, Garmin’s recalculation often adds about 50 to 100% or more. With higher ascents, the recalculated ascent is closer to the original measurement, although it still tends to add about 30 to 50%. The highest dot to the far right of the graph is the rainy ride in Scotland; here Garmin’s recalculation added over 35%.

With the selector above the graph, you can select the Strava recalculation. You’ll notice the scale on the y axis changes (and the dashed line moves up). Also, a few red dots enter the graph. These are rides along the Ronde Hoep, which is a flat ride. For these rides, Garmin’s recalculation added up to 750% to the ascent measured by the device; therefore these dots were initially outside the graph area.

The Strava recalculations are similar to the Garmin ones in that the correction is larger for relatively flat rides. Unlike Garmin, Strava lowers the ascent in these cases, often by 15 to 50%. For rides where the device measured a total ascent of over 500m, the Strava recalculation tends to be pretty close to the original measurement.

Weather changes

It has been suggested that changes in the weather may affect elevation measurements. This makes sense, since the Garmin altimeter is in fact a barometer. Wikipedia says that pressure decreases by about 1.2 kPa for every 100 metres of ascent. In other words, if atmospheric pressure were to rise by 6 mbar over the course of a ride, the device would underestimate total ascent by about 50 metres, so the theoretical effect wouldn’t seem to be huge.
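
For what it’s worth, the back-of-the-envelope calculation above can be written out as follows (a sketch using the Wikipedia figure quoted; the function name is mine):

    # Rough conversion of a pressure change during a ride into an elevation error,
    # based on the figure of roughly 1.2 kPa (12 mbar) per 100 metres of ascent.
    KPA_PER_100M = 1.2
    MBAR_PER_METRE = KPA_PER_100M * 10 / 100   # 0.12 mbar per metre

    def elevation_error(pressure_change_mbar):
        """Approximate elevation error (m) caused by a pressure change (mbar)."""
        return pressure_change_mbar / MBAR_PER_METRE

    print(elevation_error(6))   # about 50 metres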

The graph above shows how much recalculations differed from the original measurement, with the change in pressure on the x axis. Note that the effect of recalculations is shown here in metres, not percent. I tried different combinations of pressure measures and recalculations, and in only one case - the Garmin recalculation shown above - was the correlation statistically significant (with a regression line much steeper than the Wikipedia figure would suggest), so this is not exactly firm evidence for an effect of weather change on elevation measurement.
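
Roughly, the test behind this claim looks like the sketch below (the column names are illustrative and the exact procedure may have differed):

    # Regress the recalculation difference (metres) on the pressure change during
    # the ride; the slope and p-value correspond to the regression line mentioned
    # above. Column names are made up for illustration.
    import pandas as pd
    from scipy import stats

    rides = pd.read_csv("rides.csv")   # hypothetical file with one row per ride
    result = stats.linregress(rides["pressure_change_mbar"],
                              rides["garmin_correction_m"])
    print(result.slope, result.pvalue)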

Heavy rain

It has been suggested that heavy rain may block the sensor hole and thus affect elevation measurement. This may sound a bit weird, but I have seen the device stop registering any ascent during very heavy rain. Among the rides considered here, there are two that saw really heavy rainfall (the Scottish ride and a ride in Utrechtse Heuvelrug on 27 July). These do show some of the largest corrections, especially in the Strava recalculation. So it seems plausible that rain does in fact affect elevation measurement.

In the spirit of true pseudoscientific enquiry, I tried to replicate the effect of heavy rain by squirting water from my bidon onto the device during a ride in Utrechtse Heuvelrug. This didn’t yield straightforward results. At first, the device registered implausibly steep gradients, and it turned out it had interpreted the hump between Maarn and Doorn as 115m high, more than twice its real height. About halfway, unpredicted rain started to fall, mocking my experiment. The Strava recalculation didn’t change the total ascent much, but it did correct the height of the bit between Maarn and Doorn, so it must have added some 50+ metres elsewhere. Be that as it may, the «experiment» does seem to confirm that water can do things to the altimeter.

Method

I took the total ascent data measured by my Garmin Edge 800 and obtained recalculations from the Garmin Connect and Strava websites. Subsequently, I looked up weather data from Weather Underground (as an armchair activist I do appreciate their slightly subversive name). Weather Underground offers historical weather data by location, with numerous observations per day. I wrote a Python script that looks up the data for the day and location of the ride and then selects the observations that roughly overlap with the duration of the ride. There turned out to be two limitations to the data. First, it appears that only data at the national level are available (the Scottish ride yielded data for London and all the Dutch ones data for Amsterdam). Second, for the day / location combinations I tried, no time-specific precipitation data was available, only totals for the entire day.
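
The selection step amounts to something like the sketch below (the field names and the margin are assumptions; the actual scripts are linked in the next paragraph):

    # Keep the weather observations that roughly cover the ride, padded by a margin.
    from datetime import datetime, timedelta

    def observations_during_ride(observations, ride_start, ride_end, margin_hours=1):
        margin = timedelta(hours=margin_hours)
        return [obs for obs in observations
                if ride_start - margin <= obs["time"] <= ride_end + margin]

    # Hypothetical usage: pressure change over the course of a ride
    obs = [{"time": datetime(2013, 7, 27, 9, 50), "pressure_mbar": 1012.3},
           {"time": datetime(2013, 7, 27, 13, 20), "pressure_mbar": 1009.8}]
    selected = observations_during_ride(obs, datetime(2013, 7, 27, 10, 0),
                                        datetime(2013, 7, 27, 13, 0))
    pressure_change = selected[-1]["pressure_mbar"] - selected[0]["pressure_mbar"]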

Because of these limitations, I also took an alternative approach, looking up data from the Royal Netherlands Meteorological Institute KNMI. This did yield more fine-grained data, although obviously limited to the Netherlands. In the end, it didn’t make much difference for the analysis whether KNMI or Weather Underground data was used. Code from the scripts I used for looking up weather data is here.

I tested quite a few correlations so a couple of ‘false positives’ may be expected. I didn’t statistically correct for this. Instead, I took a rather pragmatic approach: I’m cautious when there’s simply a significant correlation between two phenomena but I’m more confident when there’s a pattern to the correlations (e.g., Garmin and Strava recalculations are correlated in a similar way to another variable).

Counting unofficial retweets

One way of finding out who’s influential on Twitter is to count how often people are retweeted. I did so when analysing the Twitter discussion on the election of the new president of the Dutch trade union confederation FNV.

I counted both ‘official’ retweets – retweets acknowledged by Twitter – and ‘unofficial’ retweets. Unofficial retweets may have been generated by unofficial Twitter apps (I think) or users may have typed them manually. They may have the pattern RT@username:text (which is also the pattern of official retweets), the pattern "@username:text", or the pattern text via @username (this pattern wasn’t in my original analysis). Perhaps there are more flavours around that I don’t know of.

When looking for background information, I came across a comment by an SEO analyst explaining why they don’t count unofficial retweets:

To try to count non-official RTs is a messy business, as it would require a lot more Twitter API calls for possibly negligible benefit. Why negligible? We make an assumption that non-official RTs correlate strongly with official RTs. We can then use the latter as a proxy for the former. This assumption may not be true, of course. That is, by not using non-official RTs, we may ignore pockets of users who generate many more unofficial RTs... perhaps those who ask a question, or invite a response? (comment by Pete Bray on this article)

Below is some information on how often users in my FNV sample were retweeted within that same sample.

Prevalence of types of retweets
                     Official retweet   RT@username:text   "@username:text"   text via @username
Sum                  3,544              113                98                 60

At least within this sample, unofficial retweets are not very common: they make up about seven percent of all retweets. And here’s some information on how official and unofficial retweets are correlated:

Correlations between types of retweets (Spearman)
                     Official retweet   RT@username:text   "@username:text"   text via @username
Official retweet     1                  0.28               0.25               0.13
RT@username:text     0.28               1                  0.14               0.10
"@username:text"     0.25               0.14               1                  0.13
text via @username   0.13               0.10               0.13               1

Users who generate more official retweets also tend to generate more unofficial retweets, but the correlation is not particularly strong. So based on this sample, it would seem conceivable that there are indeed ‘pockets of users who generate many more unofficial RTs’ – as suggested by Bray.

Method

The sample contains close to 11,000 tweets containing the string FNV, collected between 26 April and 16 May. For background see this article; the analysis of retweets in the FNV debate is here. The code I used for the analysis above is here.

If you have a sample of tweets and you want to know how often users in that sample have been retweeted, you can only find that out for retweets that are also in the sample. In my case that wasn’t a problem, since I was interested in who was influential within a specific discussion. However, if you were interested in constructing a general measure of how influential Twitter users are, you’d probably need a pretty large sample of tweets.

The messiest type of retweet is probably text via @username. Often these aren’t real retweets but are added by services like ShareThis or AddThis, or by news websites that have their own share service (I only included users if they were already in the sample, i.e. had tweeted texts containing FNV; this eliminates ShareThis and AddThis tweets). I looked for the pattern via @ followed by any number of non-whitespace characters at the end of the line, or followed by any number of non-whitespace characters before the first whitespace. This method may not be 100% accurate, but I think it’ll do. The regex patterns used to find the different types of retweets are in the code.
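
The patterns themselves are in the linked code; as an illustration, a regex along the lines described here might look like this (a reconstruction, not a copy of the actual code):

    # Match 'text via @username': 'via @' followed by non-whitespace characters,
    # up to the first whitespace or the end of the line.
    import re

    VIA_PATTERN = re.compile(r"via @(\S+?)(?:\s|$)")

    def via_username(tweet_text):
        """Return the username from a 'text via @username' tweet, or None."""
        match = VIA_PATTERN.search(tweet_text)
        return match.group(1) if match else None

    print(via_username("Nieuwe voorzitter gekozen http://example.com via @nrc"))  # nrc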

Because the retweet counts are not normally distributed (many have a value of 0), I used Spearman rank correlation; Pearson’s correlation would have yielded stronger - but still not particularly strong - correlations of up to 0.5.
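
For illustration, computing both correlations with scipy looks roughly like this (the counts below are made up):

    # Spearman rank correlation copes better with the many zero counts than Pearson.
    from scipy import stats

    official = [12, 0, 3, 0, 7, 1, 0, 25]    # official retweets per user
    unofficial = [2, 0, 1, 0, 0, 0, 0, 4]    # one unofficial flavour per user

    rho, p_spearman = stats.spearmanr(official, unofficial)
    r, p_pearson = stats.pearsonr(official, unofficial)
    print(rho, r)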

Who will be the new FNV president, according to Twitter

Number of tweets in which candidates are mentioned


According to an American study, you can predict the outcome of elections by simply counting how often the names of the candidates are mentioned on Twitter. Could you use this method to predict who will be the new president of the FNV?

Since last Friday, I have been collecting tweets containing the term ‘FNV’; so far there are over 2,500. In these tweets, Ton Heerts is mentioned 204 times and Corrie van Brenk 146 times (tweets mentioning both are disregarded). In short, if Twitter is a good gauge (which is of course debatable), the contest will be tighter than it may initially have seemed.

The graph above shows the results for the days for which complete data are available. From Saturday onwards, Van Brenk got attention (because of this factcheck). On Sunday, Heerts was mentioned because he was a guest on Eva Jinek’s TV show. On 1 May, the candidates were officially announced and they held a debate.

Update - Updated up to and including 13 May, the last day on which votes could be cast. In total, Van Brenk was mentioned 497 times and Heerts 631 times. It has since been announced that Heerts has won the election (which, of course, doesn’t mean the method is sound: to say anything meaningful about that, you would need to be able to evaluate a fair number of predictions).
Influences visible in the graph include: factcheck confirms Van Brenk statement (from 27 April); Heerts on Eva Jinek’s show (28 April); official announcement of the candidates (1 May); debate on Buitenhof (5 May); tax affair about which Van Brenk’s Abvakabo FNV had already raised the alarm (6 May); Van Brenk interview on Nu.nl (9 May); Van Brenk radio appearance (10 May); Heerts at the presentation of the Techniekpact (13 May); EenVandaag poll predicts Heerts will win (13 May).
The graph may not be visible in older versions of Internet Explorer.

Method

I collected tweets using the Twitter Streaming API, in the way described here, filtering on the search term ‘fnv’. I processed the data with Python and analysed it with R (the code is on Github). The graph was created with D3.js.
I also looked into how influential the twitterers are (how many followers; how often they are listed) and into their backgrounds (e.g., do they mention the FNV in their profile). The main finding was that twitterers who mention Van Brenk more often mention ‘abva’ or ‘akf’ in their profile - not surprising, since Van Brenk is currently president of Abvakabo FNV.
The American study on Twitter as a predictor of election outcomes was done by DiGrazia and others and can be found here. Some remarks on their study:

  • It is of course true that twitterers make up only a small part of the population and that they are not representative of the population as a whole. The picture on Twitter is probably largely determined by a small, active in-crowd. It is also true that a tweet mentioning a candidate is not always positive; sometimes it is critical. Despite all this, DiGrazia et al. found that the number of times a candidate is mentioned on Twitter is a consistent predictor of election outcomes. Perhaps the number of mentions on Twitter is an indicator of something else, such as media attention or how actively people are campaigning for a candidate.
  • Of course, the method offers no certainty about who will win. A candidate may get almost 100% of the Twitter mentions and still lose (at least, that is what the scatterplots shown by DiGrazia et al. suggest).
  • It is unclear to what extent the conclusions of the American study can be generalised to other situations. Using this method to predict who will be the next president of the FNV is therefore a bit of a gamble.

Can Twitter predict the new Dutch trade union president

Number of tweets in which candidates are mentioned


According to an American study, you can predict the outcome of elections by simply counting how often the names of the candidates are mentioned on Twitter. Members of the Dutch union confederation FNV are currently voting for their new president (it has been claimed this is the first time in the world union members get to directly elect their confederation president). Would it be possible to predict who will be the new FNV president using Twitter?

Since last Friday, I’ve been collecting the tweets containing the term ‘FNV’; so far, there are over 2,500. In those tweets, the incumbent Ton Heerts is mentioned 204 times, whereas his challenger Corrie van Brenk is mentioned 146 times. In short, if Twitter is a good predictor (which of course is a matter for debate), the contest is tighter than one might have expected.
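
The counting step boils down to something like the sketch below (not the actual script, which is on Github; here I simply count tweets mentioning exactly one candidate and disregard tweets mentioning both):

    # Count tweets that mention exactly one of the candidates.
    def count_mentions(tweets, names=("heerts", "van brenk")):
        counts = {name: 0 for name in names}
        for tweet in tweets:
            text = tweet.lower()
            mentioned = [name for name in names if name in text]
            if len(mentioned) == 1:
                counts[mentioned[0]] += 1
        return counts

    # Hypothetical example
    print(count_mentions(["Ton Heerts te gast bij Eva Jinek",
                          "Factcheck: uitspraak Corrie van Brenk klopt"]))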

The graph above shows the results for the days for which complete data is available. On Saturday, Van Brenk got some attention because something she had said had been fact checked (and found to be correct). On Sunday, Heerts was mentioned because he appeared on a TV show hosted by Eva Jinek. On 1 May, it was officially announced who the candidates are and they had a debate.

Update - Updated to include 13 May, the final voting day. In sum, Van Brenk was mentioned 497 times and Heerts 631. It has since been announced that Heerts has won the election (of course, this doesn’t necessarily mean that the method is sound; in order to make such claims one would need to evaluate a fair number of predictions).
Influences reflected in the graph include: Factcheck confirms Van Brenk statement (27 April); Heerts in Eva Jinek TV show (28 April); candidates officially announced (1 May); debate in Buitenhof TV show (5 May); problems at tax authorities that Van Brenk’s Abvakabo FNV had warned about (6 May); Van Brenk interview at Nu.nl (9 May); Van Brenk in radio show (10 May); Heerts at presentation of initiative to train technical staff (13 May); EenVandaag TV show poll predicts Heerts will win (13 May).
The graph may not be visible in older versions of Internet Explorer.

Method

I collected tweets using the Twitter Streaming API, filtering on the term ‘fnv’, in the way described here. I prepared the data using Python and analysed it using R (find the code on Github). The graph was created with D3.js.
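
The collection setup is described in the linked post; as an illustration, such a script looked roughly like this with the tweepy library of that era (tweepy, the class name and the file name are my assumptions; the post only says the Streaming API was used):

    # Collect tweets matching a search term from the Twitter Streaming API.
    import tweepy

    CONSUMER_KEY = "..."       # fill in your own credentials
    CONSUMER_SECRET = "..."
    ACCESS_TOKEN = "..."
    ACCESS_SECRET = "..."

    class FNVListener(tweepy.StreamListener):
        def on_data(self, data):
            # append each raw tweet (JSON) to a file for later processing
            with open("fnv_tweets.json", "a") as f:
                f.write(data)
            return True

        def on_error(self, status_code):
            return False       # stop streaming on errors

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
    stream = tweepy.Stream(auth=auth, listener=FNVListener())
    stream.filter(track=["fnv"])   # only tweets containing the search term
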
I looked into how influential twitterers are (how many followers, how often listed) and into their backgrounds (e.g., do they mention ‘fnv’ in their profile). The most important finding is that twitterers who mention Van Brenk more often mention ‘abva’ or ‘akf’ in their profile - not surprising, since Van Brenk is currently president of Abvakabo FNV, the public sector union affiliated to the FNV.
The American study on Twitter as a predictor of election outcomes was done by DiGrazia et al. and can be found here. Some remarks on their study:

  • Yes, twitterers are only a small part of the population and no, they’re not representative of the entire population. Likely, Twitter is dominated by a small, active in-crowd. It’s also correct that tweets mentioning a candidate need not endorse them; they may just as well be critical. Despite all this, DiGrazia et al. found that mentions on Twitter consistently predict election outcomes. Perhaps they are an indicator of something else - e.g. media attention or how actively people are campaigning for a candidate.
  • Of course, this method doesn’t provide any certainty on who will win. It’s possible for a candidate to get almost 100% of the tweet share and still lose (at least, that’s what the scatterplots of DiGrazia et al. suggest).
  • It’s unclear to what extent the conclusions of the American study can be generalised to other situations. It’s therefore a bit of a gamble to use this method to predict who will be the next president of the FNV.

Clint Eastwood won the LAUGHTER contest

I’m not sure what this says about the audiences at US national party conventions, but among a sample of 16 speeches, Clint Eastwood’s was the one that elicited the most laughter (Rand Paul’s got the most applause). Among the presidential candidates, Obama won the applause contest, while being about as funny as Romney.

For the second lesson of Alberto Cairo’s online data visualisation course, we were asked to comment on and perhaps redesign this convention word count tool created by the NYT. I wouldn’t be able to do such a cool interactive thing myself (I got stuck in the jQuery part of Codeyear), so I decided to focus on differences between individual speeches instead.

First I needed the transcripts – preferably from one single source to make sure the transcription had been done in a uniform way. As far as I could find, Fox News has the largest collection of transcripts online. As a result, Republican speakers are overrepresented in my sample, but that’s ok because the key Democratic speakers are included as well.

I wrote a script to do the word count (I’m sure this could be done in a more elegant way). One problem with my script was that HTML code got included in the total word count. I thought I could correct this by subtracting 1,000 from each word count, but this didn’t work so well, so I had to make some corrections.
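
For what it’s worth, a cleaner way around the HTML problem would be to strip tags before counting, along the lines of the sketch below (this is not the original script; the tag stripping and URL handling are my own assumptions):

    # Count words in a transcript page after stripping scripts, styles and tags.
    import re
    from urllib.request import urlopen

    def word_count(url):
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
        text = re.sub(r"<[^>]+>", " ", text)
        return len(text.split())

    # Hypothetical usage with one of the transcript pages
    # print(word_count("http://www.foxnews.com/.../transcript"))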

This assignment was a bit of a rush job so I hope I didn’t make any stupid mistakes.
