champagne anarchist | armchair activist


Just 13% of my Linkedin connections use buzzwords

Linkedin recently released its latest analysis of overused buzzwords in members’ profiles. Of course, this is just a ploy to get you to volunteer more personal details («Update your profile today!»), but never mind that.

Just in case, I checked whether any of my connections engage in overusing buzzwords. Reassuringly, the majority can’t even be bothered to fill out «summaries» and «specialties» in the first place. Those who do seldom use the top-ten buzzwords for the Netherlands, as the table below shows.

Dutch term         %     English term   %
Verantwoordelijk   0.0   Responsible    0.5
Strategisch        1.8   Strategic      3.2
Expert             3.7   Expert         3.7
Creatief           0.9   Creative       1.8
Innovatief         0.0   Innovative     1.4
Dynamisch          0.0   Dynamic        0.5
Gedreven           0.5   Motivated      0.9
Duurzaam           0.0   Sustainable    1.4
Effectief          0.0   Effective      0.5
Analytisch         0.9   Analytic       0.5

Only two Dutch buzzwords are used by more than one percent of my connections. Interestingly, their English equivalents are slightly more prevalent. 87% of my connections are completely buzzword-free. For what it’s worth, people who use buzzwords also tend to have more connections.
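For the curious, the counting itself is trivial. A minimal sketch of how such percentages can be computed (the function name and data format are made up for illustration; this isn’t the actual script):

```python
def buzzword_prevalence(profiles, terms):
    """Percentage of profiles whose text contains each term (case-insensitive).

    `profiles` is a list of strings, e.g. a connection's summary and
    specialties joined together (a hypothetical format, not the actual
    Linkedin data).
    """
    result = {}
    for term in terms:
        hits = sum(1 for text in profiles if term.lower() in text.lower())
        result[term] = round(100.0 * hits / len(profiles), 1)
    return result
```

For example, `buzzword_prevalence(['Strategic thinker', 'Just a profile', 'strategic and creative'], ['strategic', 'creative'])` returns `{'strategic': 66.7, 'creative': 33.3}`.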

Full disclosure: I may have used the word «strategic» in my own profile.

Update 21 Dec - Don’t push it, Linkedin.

Incidentally, 436,567 people is less than 0.17% of all 259m Linkedin users. Not that impressive.


I used these scripts (in part adapted from Matthew Russell’s Mining the Social Web) to get the «summaries» and «specialties» of connections from the Linkedin API and process them.

Script to look up the gender of Dutch first names

This script determines the gender of Dutch persons by looking up their first name in a database of the Meertens Institute. The database indicates how often the name occurred as a first name for men and women in 2010. If the name is used for women substantially more often than for men, the name will be interpreted as female – and vice versa.
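The decision rule can be sketched roughly like this (the function name and the threshold of 2.0 are illustrative assumptions, not necessarily what the actual script uses):

```python
def classify_gender(male_count, female_count, ratio=2.0):
    """Classify a first name based on how often it was registered for men
    and women (counts as found in the Meertens database).

    A name counts as female only if it occurs at least `ratio` times as
    often for women as for men, and vice versa; otherwise the gender stays
    undetermined. The threshold of 2.0 is an assumption for illustration.
    """
    if female_count >= ratio * max(male_count, 1):
        return 'F'
    if male_count >= ratio * max(female_count, 1):
        return 'M'
    return 'unknown'
```

Names like Knetemann that occur (almost) equally often for both genders end up as `'unknown'`, which is why such a script can never classify every participant.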

The reason I wrote the script has to do with this article on how the performance of women professional road cyclists is improving. I wanted to check whether a similar trend is going on among amateur riders – more specifically, participants in the Gerrie Knetemann Classic (incidentally, the script would take Knetemann for a woman – it’s not foolproof). The results of the ride are available online, but pre-2012 editions lack information on the gender of participants. So that’s what the script was for.

Speed of participants in Knetemann Classic

The results of the analysis aren’t exactly clear-cut. The number of women participants in the 150km ride varied from 36 to 46, or 5 to 8% of the participants whose gender could be determined (the percentage for 2013 was 6%). The (median) speed of women participants rose in 2013, and more so than for men, but this is rather thin evidence to speak of a trend.

Cycling: Garmin altimeter compared to elevation databases

During a very rainy ride in Scotland, my Garmin altimeter appeared to be off: on some of the steepest climbs it failed to register any gradient. Afterwards, I tried the «elevation correction» feature on the Garmin website, which generously added over 750m to the total ascent the device had measured. This was certainly more satisfying, but it left me wondering. Can the weather affect the Garmin altimeter? And how accurate is the recalculated ascent?

Garmin’s recalculation service basically works by looking up the GPS locations of your ride in an elevation database. Strava offers a similar service. Below, I analyse the Garmin and Strava recalculations for a number of rides. Note that this is only an exploratory analysis and that no firm conclusions can be drawn from this rather small set of observations. That said, here are some preliminary conclusions:

  • If you want to boost your ego, let Garmin recalculate your ascent: chances are it will add (quite) a few metres. Strava’s recalculations tend to stay closer to the original measurement. When it does make changes, it frequently lowers the number of metres you’re supposed to have climbed, especially on relatively flat rides.
  • In theory, you’d expect weather changes to affect the ascent measured by the device, because the altimeter is basically a barometer. In practice, weather changes don’t seem to have much effect on the altimeter.
  • It appears plausible that heavy rain does in fact mess with the altimeter.

In the graphs below, the colour of the dots represents the region of the ride. Red dots represent the Ronde Hoep, a flat ride to the south of Amsterdam. Blue ones represent the Kopje van Bloemendaal (north, south), the closest thing to a climb near Amsterdam (it’s not high but quite steep). Green dots represent the central area of the country and include the Utrechtse Heuvelrug, Veluwezoom, Rijk van Nijmegen and Kreis Kleve (the latter in Germany).


By default, the graph above shows how much the Garmin recalculation differs from the ascent measured by the device (graphs may not show in older versions of Internet Explorer). The closer a dot is to the dashed line, the closer the recalculated ascent is to the original measurement.

For rides shown on the left part of the graph, where the device measured less than 500m ascent, Garmin’s recalculation often adds about 50 to 100% or more. With higher ascents, the recalculated ascent is closer to the original measurement, although it still tends to add about 30 to 50%. The highest dot to the far right of the graph is the rainy ride in Scotland; here Garmin’s recalculation added over 35%.

With the selector above the graph, you can select the Strava recalculation. You’ll notice the scale on the y axis changes (and the dashed line moves up). Also, a few red dots enter the graph. These are rides along the flat Ronde Hoep; for these rides, Garmin’s recalculation added up to 750% to the ascent measured by the device, so these dots were initially outside the graph area.

The Strava recalculations are similar to the Garmin ones in that the correction is larger for relatively flat rides. Unlike Garmin, Strava lowers the ascent in these cases, often by 15 to 50%. For rides where the device measured a total ascent of over 500m, the Strava recalculation tends to be pretty close to the original measurement.

Weather changes

It has been suggested that changes in the weather may affect elevation measurements. This makes sense, since the Garmin altimeter is in fact a barometer. Wikipedia says that pressure decreases by about 1.2 kPa for every 100 metres of ascent. In other words, if net atmospheric pressure rose by 6 mbar during a ride, the device would underestimate total ascent by about 50 metres – so the theoretical effect wouldn’t seem to be huge.
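The back-of-the-envelope arithmetic behind that 50-metre figure, as a sketch (using the Wikipedia rate of ~1.2 kPa, i.e. 12 mbar, per 100 metres; the function is mine, for illustration):

```python
MBAR_PER_100M = 12.0  # ~1.2 kPa per 100 m near sea level (Wikipedia figure)

def ascent_error_metres(net_pressure_change_mbar):
    """Approximate ascent error caused by a net pressure change during a ride.

    A pressure *rise* makes a barometric altimeter read lower over time,
    so the device underestimates total ascent by this many metres.
    """
    return net_pressure_change_mbar / MBAR_PER_100M * 100.0
```

So a 6 mbar rise works out to `ascent_error_metres(6)` = 50 metres of underestimated ascent.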

The graph above shows how much recalculations differed from the original measurement, with change in pressure on the x axis. Note that the effect of recalculations is here in metres, not percent. I tried different combinations of pressure measures and recalculations, and in only one case – the Garmin recalculation shown above – was the correlation statistically significant (and the regression line much steeper than the Wikipedia data would suggest), so this is not exactly firm evidence for an effect of weather change on elevation measurement.

Heavy rain

It has been suggested that heavy rain may block the sensor hole and thus affect elevation measurement. This may sound a bit weird, but I have seen the device stop registering any ascent during very heavy rain. Among the rides considered here, two saw really heavy rainfall (the Scottish ride and a ride in the Utrechtse Heuvelrug on 27 July). These do show some of the largest corrections, especially in the Strava recalculation. So it does seem plausible that rain affects elevation measurement.

In the spirit of true pseudoscientific enquiry, I tried to replicate the effect of heavy rain by squirting water from my bidon onto the device during a ride in the Utrechtse Heuvelrug. This didn’t yield straightforward results. At first, the device registered implausibly steep gradients, and it turned out it had interpreted the hump between Maarn and Doorn as 115m high, more than twice its real height. About halfway, unpredicted rain started to fall, mocking my experiment. The Strava recalculation didn’t change the total ascent much, but it did correct the height of the bit between Maarn and Doorn, so it must have added some 50+ metres elsewhere. Be that as it may, the «experiment» does seem to confirm that water can do things to the altimeter.


I took the total ascent measured by my Garmin Edge 800 and obtained a recalculation from the Garmin Connect and Strava websites. Subsequently, I looked up weather data from Weather Underground (as an armchair activist, I do appreciate their slightly subversive name). Weather Underground offers historical weather data by location, with numerous observations per day. I wrote a Python script that looks up the data for the day and location of the ride and then selects the observations that roughly overlap with the duration of the ride. There turned out to be two limitations to the data. First, it appears that only data at the national level are available (the Scottish ride yielded data for London and all Dutch rides data for Amsterdam). Second, for the day / location combinations I tried, there were no time-specific precipitation data available, only daily totals.

Because of these limitations, I also took an alternative approach, looking up data from the Royal Netherlands Meteorological Institute KNMI. This did yield more fine-grained data, although obviously limited to the Netherlands. In the end, it turned out that it didn’t make much difference for the analysis whether KNMI or Weather Underground data are used. Code from the scripts I used for looking up weather data is here.
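The observation-selection step can be sketched like this (the tuple format and the half-hour margin are assumptions for illustration, not the actual script):

```python
from datetime import datetime, timedelta

def observations_during_ride(observations, start, end,
                             margin=timedelta(minutes=30)):
    """Keep the weather observations that roughly overlap with the ride.

    `observations` is a list of (timestamp, value) tuples, e.g. pressure
    readings parsed from a weather-history page (a hypothetical format).
    """
    return [(t, v) for t, v in observations
            if start - margin <= t <= end + margin]
```

With hourly observations and a ride from 10:00 to 13:00, this keeps the 10:00 through 13:00 readings (anything within half an hour of the ride).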

I tested quite a few correlations so a couple of ‘false positives’ may be expected. I didn’t statistically correct for this. Instead, I took a rather pragmatic approach: I’m cautious when there’s simply a significant correlation between two phenomena but I’m more confident when there’s a pattern to the correlations (e.g., Garmin and Strava recalculations are correlated in a similar way to another variable).

Counting unofficial retweets

One way of finding out who’s influential on Twitter is to count how often people are retweeted. I did so when analysing the Twitter discussion on the election of the new president of the Dutch trade union confederation FNV.

I counted both ‘official’ retweets – retweets acknowledged by Twitter – and ‘unofficial’ retweets. Unofficial retweets may have been generated by unofficial Twitter apps (I think) or users may have typed them manually. They may have the pattern RT@username:text (which is also the pattern of official retweets), the pattern "@username:text", or the pattern text via @username (this pattern wasn’t in my original analysis). Perhaps there are more flavours around that I don’t know of.
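The three unofficial flavours can be captured with regular expressions along these lines (rough approximations of my own; real tweets are messier, and the actual patterns are in the code):

```python
import re

# Rough patterns for the three flavours of unofficial retweets described
# above; treat these as approximations, not the definitive patterns.
PATTERNS = [
    ('rt',    re.compile(r'\bRT\s*@(\w+)')),          # RT @username: text
    ('quote', re.compile(r'^"@(\w+)[:\s]')),          # "@username: text"
    ('via',   re.compile(r'\bvia\s+@(\w+)', re.I)),   # text via @username
]

def retweeted_user(tweet):
    """Return (flavour, username) for the first matching pattern, else None."""
    for flavour, pattern in PATTERNS:
        match = pattern.search(tweet)
        if match:
            return flavour, match.group(1)
    return None
```

For example, `retweeted_user('interesting read via @carol')` returns `('via', 'carol')`, while a tweet with no retweet marker returns `None`.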

When looking for background information, I came across a comment by an SEO analyst explaining why they don’t count unofficial retweets:

To try to count non-official RTs is a messy business, as it would require a lot more Twitter API calls for possibly negligible benefit. Why negligible? We make an assumption that non-official RTs correlate strongly with official RTs. We can then use the latter as a proxy for the former. This assumption may not be true, of course. That is, by not using non-official RTs, we may ignore pockets of users who generate many more unofficial RTs... perhaps those who ask a question, or invite a response? (comment by Pete Bray on this article)

Below is some information on how often users in my FNV sample were retweeted within that same sample.

Prevalence of types of retweets
                     Official retweet   RT@username:text   "@username:text"   text via @username
Sum                  3,544              113                98                 60

At least within this sample, unofficial retweets are not very common: they make up about seven percent of all retweets. And here’s some information on how official and unofficial retweets are correlated:

Correlations between types of retweets (Spearman)
                     Official retweet   RT@username:text   "@username:text"   text via @username
Official retweet     1                  0.28               0.25               0.13
RT@username:text     0.28               1                  0.14               0.10
"@username:text"     0.25               0.14               1                  0.13
text via @username   0.13               0.10               0.13               1

Users who generate more official retweets also tend to generate more unofficial retweets, but the correlation is not particularly strong. So based on this sample, it would seem conceivable that there are indeed ‘pockets of users who generate many more unofficial RTs’ – as suggested by Bray.


The sample contains close to 11,000 tweets containing the string FNV, collected between 26 April and 16 May. For background see this article; the analysis of retweets in the FNV debate is here. The code I used for the analysis above is here.

If you have a sample of tweets and you want to know how often users in that sample have been retweeted, you can only find that out for retweets that are also in the sample. In my case that wasn’t a problem, since I was interested in who was influential within a specific discussion. However, if you wanted to construct a general measure of how influential Twitter users are, you’d probably need a pretty large sample of tweets.

The messiest type of retweet is probably text via @username. Often these aren’t real retweets but added by services like sharethis or AddThis or by news websites that have their own share service (I only included users if they were already in the sample, i.e. had tweeted texts containing FNV; this eliminates sharethis and AddThis tweets). I looked for the pattern via @ followed by any number of non-whitespace characters at the end of the line, or followed by any number of non-whitespace characters before the first whitespace. This method may not be 100% accurate, but I think it’ll do. The regex patterns used to find the different types of retweets are in the code.

Because the retweet counts are not normally distributed (many have a value of 0), I used Spearman rank correlation; Pearson’s correlation would have yielded stronger – but still not particularly strong – correlations of up to 0.5.
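For reference, Spearman’s rho is simply Pearson’s correlation computed on ranks, with tied values (like all those zeros) sharing the average of their positions. A self-contained sketch:

```python
def average_ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the group of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson's correlation computed on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5
```

Because only ranks matter, a single user with a huge retweet count can’t dominate the correlation the way it would with Pearson’s r.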

Who will be the new FNV president, according to Twitter

Number of tweets mentioning the candidates

According to American research, you can predict the outcome of elections simply by counting how often the candidates’ names are mentioned on Twitter. Could this method also predict who will become the new president of the FNV?

Since last Friday, I have been collecting tweets containing the term ‘FNV’; so far there are over 2,500. In these tweets, Ton Heerts is mentioned 204 times and Corrie van Brenk 146 times (I disregard tweets that mention both). In short, if Twitter is a good barometer (which is of course debatable), the race will be more exciting than it may initially have seemed.
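The counting rule – tweets mentioning both candidates are disregarded – can be sketched as follows (a hypothetical helper, not the actual script):

```python
def count_mentions(tweets, candidates):
    """Count per-candidate mentions, skipping tweets that name more than
    one candidate (as in the analysis above)."""
    counts = {c: 0 for c in candidates}
    for tweet in tweets:
        text = tweet.lower()
        named = [c for c in candidates if c.lower() in text]
        if len(named) == 1:  # ignore tweets naming several candidates
            counts[named[0]] += 1
    return counts
```

A tweet like «Heerts en Van Brenk in debat» names both candidates and therefore counts for neither.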

The graph above shows the results for the days for which complete data are available. From Saturday onwards, there was attention for Van Brenk (because of this factcheck). On Sunday, Heerts was mentioned because he was a guest on Eva Jinek’s TV show. On 1 May, the candidates were officially announced and debated each other.

Update - Updated up to and including 13 May, the last day on which votes could be cast. In total, Van Brenk was mentioned 497 times and Heerts 631 times. It has since been announced that Heerts has won the election (which, of course, doesn’t prove the method is sound: to say anything sensible about that, you’d need to evaluate a substantial number of predictions).
Among the influences visible in the graph: a factcheck confirming a statement by Van Brenk (from 27 April); Heerts on Eva Jinek’s show (28 April); the official announcement of the candidates (1 May); the debate on Buitenhof (5 May); a tax affair that Van Brenk’s Abvakabo FNV had already raised the alarm about (6 May); an interview with Van Brenk (9 May); a radio appearance by Van Brenk (10 May); Heerts at the presentation of the Techniekpact (13 May); an EenVandaag poll predicting that Heerts will win (13 May).
The graph may not display in older versions of Internet Explorer.


I collected tweets using the Twitter Streaming API, in the way described here, filtering on the search term ‘fnv’. I processed the data with Python and analysed it with R (the code is on Github). The graph was made with D3.js.
I also looked at how influential the tweeters are (how many followers; how often they’re included in lists) and at their background (do they, for example, mention the FNV in their profile). The main finding was that tweeters who mention Van Brenk more often have ‘abva’ or ‘akf’ in their profile - not surprising, since Van Brenk is currently president of Abvakabo FNV.
The American research into Twitter as a predictor of election outcomes was carried out by DiGrazia and others and can be found here. Some remarks about their research:

  • It’s certainly true that tweeters make up only a small part of the population and that they aren’t representative of the population as a whole. Most likely, the picture on Twitter is largely determined by a small, active in-crowd. It’s also true that a tweet mentioning a candidate isn’t always positive; sometimes it actually expresses criticism. Despite all this, the research by DiGrazia et al. found that the number of times a candidate is mentioned on Twitter is a consistent predictor of election outcomes. Perhaps the number of mentions on Twitter is an indicator of something else, such as media attention or how actively a campaign is being run for a candidate.
  • Of course, the method offers no certainty about who will win. A candidate may get almost 100% of the Twitter mentions and still lose (at least, that’s what the scatterplots shown by DiGrazia et al. suggest).
  • It’s unclear to what extent the conclusions of the American research generalise to other situations. Predicting who will become the new FNV president with this method is therefore a bit of a gamble.