champagne anarchist | armchair activist

Python

Using Strava tweets to analyse cycling patterns

A recent report by traffic research institute SWOV analyses accidents reported by cyclists on racing bikes in the Netherlands. Among other things, the data show an early summer dip in accidents: 53 in May, 38 in June and 51 in August. A bit of googling revealed this is a common phenomenon, although the dip appears to occur earlier than elsewhere (cf. this analysis of cycling accidents in Montréal).

Below, I discuss a number of possible explanations for the pattern.

Statistical noise

Given the relatively small number of reported crashes in the SWOV study, the pattern could be due to random variation. Also, respondents were asked in 2014 about crashes they had had in 2013, so memory effects may have had an influence on the reported month in which accidents took place. On the other hand, the fact that similar patterns have been found elsewhere suggests it may well be a real phenomenon.

Holidays

An OECD report says the summer accident dip is specific to countries with «a high level of daily utilitarian cycling» such as Belgium, Denmark and the Netherlands. The report argues the drop is «most likely linked to a lower number of work-cycling trips due to annual holidays».

If you look at the data presented by the OECD, this explanation seems plausible. However, holidays can’t really explain the data reported by SWOV. Summer holidays started between 29 June and 20 July (there’s regional variation), so the dip should have occurred in August instead of June.

Further, you’d expect a drop in bicycle commuting during the summer, but surely not in riding racing bikes? I guess the best way to find out would be to analyse Strava data, but unfortunately Strava isn’t as forthcoming with its data as one might wish (in terms of open data, it would rank somewhere between Twitter and Facebook).

A possible way around this is to count tweets of people boasting about their Strava achievements. Of course, there are several limitations to this approach (I discuss some in the Method section below). Despite these limitations, I think Strava tweets could serve as a rough indicator of road cycling patterns. An added bonus is that the length of the ride is often included in the tweets.

The chart above shows Dutch-language Strava tweets for the period April 2014 - March 2015. Whether you look at the number of rides or the total distance, there’s no early summer drop in cycling. There’s a peak in May, but none in August - September.
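
For those curious, this is roughly how the monthly aggregation behind the chart can be done in pandas — a minimal sketch, assuming the cleaned tweets sit in a CSV with a posted_at timestamp and a distance_km column (both names are hypothetical; the actual script is linked in the Method section below).

```python
# Minimal sketch: number of rides and total distance per month,
# assuming a hypothetical CSV produced by the cleaning step.
import pandas as pd

tweets = pd.read_csv("strava_tweets_clean.csv", parse_dates=["posted_at"])
monthly = (tweets
           .groupby(tweets["posted_at"].dt.to_period("M"))
           .agg(rides=("distance_km", "size"),
                total_km=("distance_km", "sum")))
print(monthly)
```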

Sunset

According to the respondents of the SWOV study, 96% of accidents happened in daylight. Of course, this doesn’t rule out that some accidents happened at dusk, and there may be a seasonal pattern to this.

Many tweets contain the time at which they were tweeted. This is a somewhat problematic indicator of the time at which trips took place, if only because it’s unclear how much time elapsed between the ride and the moment it was tweeted. But let’s take a look at the data anyway.
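
As a rough check, here is how the median posting time per month could be computed — again a minimal sketch, assuming the same hypothetical CSV as above with a posted_at timestamp column.

```python
# Median posting time (in decimal hours) per month.
import pandas as pd

tweets = pd.read_csv("strava_tweets_clean.csv", parse_dates=["posted_at"])
hour = tweets["posted_at"].dt.hour + tweets["posted_at"].dt.minute / 60
median_hour = hour.groupby(tweets["posted_at"].dt.to_period("M")).median()
print(median_hour)
```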

I think tweets tend to be posted rather early in the day. Also, the switch between summer and winter time doesn’t show up in the median post time (perhaps Twitter converts the times to the current local time).

That said, the data suggests that rides take place closer to sunset during the winter, not during the months of May and August which show a rise in accidents. So, while no firm conclusions should be drawn on the basis of this data, there are no indications that daylight patterns can explain accident patterns.

Weather

Perhaps more accidents happen when many people cycle and there’s a lot of rain. In 2013, there was a lot of rain in May; subsequently the amount of rain declined, and there was a peak again in September (pdf). So at first sight, it seems that the weather could explain the accident peak in May, but not the one in August.

Conclusion

None of the explanations for the early summer drop in cycling accidents seem particularly convincing. It’s not so difficult to find possible explanations for the peak in May, but it’s unclear why this is followed by a decline and a second peak in August. This remains a bit of a mystery.

Method

Unfortunately, the Twitter API won’t let you access old tweets, so you have to use the advanced search option (sample url) and then scroll down (or hit CMD and the down arrow) until all tweets have been loaded. This takes some time. I used rit (ride) and strava as search terms; this appears to be a pretty robust way to collect Dutch-language Strava tweets.

It seems that Strava started offering a standard way to tweet rides as of April 2014. Before that date, the number of Strava tweets was much smaller and the wording of the tweets wasn’t uniform. So there’s probably little use in analysing tweets from before April 2014.

I removed tweets containing terms suggesting they are about running (even though I searched for tweets containing the term rit, there were still some that were obviously about running) and tweets containing references to mountain biking. I ended up with 9,950 tweets posted by 2,258 accounts. Of these, 1,153 people tweeted only once about a Strava ride. Perhaps the analysis could be improved by removing these.

I had to add 9 hrs to the tweet time, probably because I had been using a VPN when I downloaded the data.
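
For illustration, here is a hedged sketch of these cleaning steps, including the nine-hour correction (the keyword list, column names and the distance pattern are illustrative; the actual script is linked below).

```python
# Hedged sketch of the cleaning steps; not the actual script.
import re
import pandas as pd

RUN_MTB_TERMS = ["hardloop", "loopje", "run", "mtb", "mountainbike"]  # illustrative

tweets = pd.read_csv("strava_tweets_raw.csv", parse_dates=["posted_at"])

# drop tweets that look like running or mountain-bike activities
looks_off_topic = tweets["text"].str.lower().str.contains("|".join(RUN_MTB_TERMS))
tweets = tweets[~looks_off_topic].copy()

# correct the timestamps, which were off by nine hours in my download
tweets["posted_at"] = tweets["posted_at"] + pd.Timedelta(hours=9)

# pull the ride distance (e.g. '65,3 km') out of the tweet text
raw_km = tweets["text"].str.extract(r"([\d.,]+)\s*km", flags=re.IGNORECASE)[0]
tweets["distance_km"] = pd.to_numeric(raw_km.str.replace(",", "."), errors="coerce")
```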

A relevant question is how representative Strava tweets are of the amount of road cycling. According to the SWOV report, about two in three Dutch cyclists on racing bikes almost never use apps like Strava or Runkeeper; the percentage is similar for men and women. The average distance in Strava tweets is 65 km; in the SWOV report most respondents report their average ride distance is 60-90 km.

In any case, not all road cyclists use Strava and not all who use Strava consistently post their rides on Twitter (fortunately, one might add). Perhaps people who tweet their Strava rides are a bit more hardcore and perhaps more impressive rides are more likely to get tweeted.

Edit - the numbers reported above are for tweets containing the time they were posted; this information is missing in about one-third of the tweets.

Here’s the script I used to clean the Twitter data.

New relations in Amsterdam politics?

Last autumn, Amsterdam politicians debated on Twitter whether relations between coalition and opposition have changed since the March 2014 election, which resulted in a new coalition.

One way to say something about this is to look at voting on motions and amendments over the past two years. Politically, proposals that pretty much everyone agrees with aren’t that interesting:

For example, a party can submit a lot of motions that receive very broad support but in fact change little about the position, let alone the policy, of the government. In the literature this is sometimes called «hurrah voting»: everybody shouts «hurrah!», but is there really any influence? (Tom Louwerse)

In a sense, the same could be said of proposals that have the support of the entire coalition. More interesting are what I’ll call x proposals: proposals that don’t have the support of all coalition parties but are adopted nevertheless. In Amsterdam these are often proposals the VVD opposed. The explanation is simple: coalitions in Amsterdam are relatively right-wing, which means left-wing coalition parties have more allies outside the coalition.

Let’s start with the situation before the March 2014 election. The PvdA was the largest party. The coalition consisted of GroenLinks, PvdA and VVD, but the larger left-wing parties PvdA, GroenLinks and SP had a comfortable majority. The chart below shows the parties that submitted x proposals. The arrows show whose support they managed to win to get these proposals adopted.

The size of the circles corresponds to the size of the parties; the pink circles are coalition parties. The thickness of the arrows corresponds to the number of times a party supported another party’s x proposal. The direction of the arrows is shown not only by the arrowhead but also by the curvature: arrows bend to the right.

The picture is clear: PvdA and especially GroenLinks were the bridge builders who managed to find support for x proposals.

And then the situation after March 2014. D66 is now the largest party and the coalition consists of SP, D66 and VVD. PvdA and GroenLinks are thus in opposition, but they turn out to still play a key role in getting x proposals adopted. GroenLinks in particular is a bridge builder: the party was behind about half of the x proposals.

The most active bridge builder is Jorrit Nuijens (GroenLinks), followed by Maarten Poorter (PvdA) and Femke Roosma (GroenLinks).

Method

Data come from the council archive of the Amsterdam city council, where the results of votes on motions and amendments from January 2013 onwards can be downloaded as an Excel file. The file (downloaded on 31 January 2015) contains information on 1,163 (versions of) proposals voted on up to and including 17 December 2014.

A few things should be said about this Excel file. On the one hand, it’s great that this information is being made available. On the other hand, the file is a beast that can only be tamed with quite a few lines of code. The way voting behaviour is recorded varies («rejected with the votes of the SP in favour», «adopted with the votes of council members Van Drooge and De Goede against»); the structure of the title was changed in November 2014; Partij voor de Dieren is sometimes referred to by its full name and sometimes by its abbreviation; and sometimes the text describing the voting has been truncated, apparently because it didn’t fit into a cell. Given the complexity of the file, it can’t be ruled out completely that a proposal was occasionally classified incorrectly.

The analysis (necessarily) focuses mainly on visible influence. The first submitter is treated as the initiator. In practice it will no doubt happen occasionally that an initiator lets another council member have the honour of being first submitter.

The code for cleaning and analysing the data can be found here. The D3.js code for the network charts is based on this example.

A new balance in Amsterdam’s city council?

Last autumn, Amsterdam politicians discussed on Twitter whether the relations between coalition and opposition have changed since the March 2014 election, which resulted in a new coalition.

One way to look at this is to analyse voting behaviour on motions and amendments over the past two years. From a political perspective, proposals with broad support may not be very interesting:

For example, a party can propose a large number of motions that get very broad support, but materially change little in the stance, let alone the policy, of the government. In the literature, this is sometimes referred to as «hurrah voting»: everybody yells «hurrah!», but is there any real influence? (Tom Louwerse)

In a sense, it could be argued that the same applies to proposals supported by the entire coalition. More interesting are what I’ll call x proposals: proposals that do not have the support of the entire coalition, but are adopted nevertheless. In the Amsterdam situation these are often proposals opposed by the right-wing VVD. The explanation is simple: Amsterdam coalitions tend to lean to the right (relative to the composition of the city council). As a result, left-wing coalition parties have more allies outside the coalition.

Let’s start with the situation before the March 2014 election. The social-democrat PvdA was the largest party. The coalition consisted of green party GroenLinks, PvdA and VVD, but the larger left-wing parties PvdA, GroenLinks and socialist party SP had a comfortable majority. The chart below shows the parties that introduced x proposals. The arrows show who they got support from to get these proposals adopted.

The size of the circles corresponds to the size of the parties; pink circles represent coalition parties. The thickness of arrows corresponds to the number of times one party supported another party’s x proposal. The direction of the arrows is not only shown by the arrow heads but also by the curvature: arrows bend to the right.
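
As an illustration of the two steps involved — flagging x proposals and counting who supported whose proposals — here is a minimal sketch, assuming the votes have been parsed into a DataFrame with one row per proposal, a boolean adopted column, a boolean in-favour column per party and a first_submitter_party column (all names hypothetical; the actual code is linked under Method).

```python
# Sketch: flag x proposals and build the weighted edge list for the network chart.
import pandas as pd
from collections import Counter

PARTIES = ["PvdA", "GroenLinks", "VVD", "SP", "D66", "CDA", "PvdD"]
COALITION = ["GroenLinks", "PvdA", "VVD"]              # coalition before March 2014

votes = pd.read_excel("stemmingen_amsterdam.xlsx")     # hypothetical file name
is_x = votes["adopted"] & ~votes[COALITION].all(axis=1)  # adopted, coalition split

edges = Counter()
for _, row in votes[is_x].iterrows():
    initiator = row["first_submitter_party"]
    for party in PARTIES:
        if row[party] and party != initiator:          # party voted in favour
            edges[(party, initiator)] += 1             # supporter -> initiator

# edges[(supporter, initiator)] then determines the thickness of each arrow
```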

The image is clear: PvdA and especially GroenLinks were the main mediators who managed to gain support for x proposals.

And now the situation after March 2014. By now neoliberal party D66 is the largest party and the coalition consists of SP, D66 and VVD. This means that PvdA and GroenLinks are now opposition parties, but it turns out they still play a key role in getting x proposals adopted. GroenLinks initiated as many as half the x proposals.

The most active mediator is Jorrit Nuijens (GroenLinks), followed by Maarten Poorter (PvdA) and Femke Roosma (GroenLinks).

Method

Data is from the archive of the Amsterdam city council. Votes on motions and amendments from January 2013 onwards can be downloaded as an Excel file. The file (downloaded on 31 January 2015) contains data on 1,165 (versions of) proposals put to a vote up to 17 December 2014.

A few things can be said about the Excel file. On the one hand, it’s great this information is being made available. On the other hand, the file is a bit of a beast that takes quite a few lines of code to tame. The way in which voting is described varies (e.g., «rejected with the votes of the SP in favour», «adopted with the votes of council members Van Drooge and De Goede against»); the structure of the title changed in November 2014; Partij voor de Dieren is sometimes abbreviated and sometimes not; and sometimes the text describing voting has been truncated, apparently because it didn’t fit into a cell. Given the complexity of the file, it can’t be excluded completely that proposals may have been classified incorrectly.
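
To give an idea of the kind of string wrangling involved, here is a hedged sketch (the patterns are illustrative, not the rules actually used in the script).

```python
# Hedged sketch of classifying one raw outcome string from the Excel file.
import re

def classify(outcome_text):
    """Return (adopted, parties, direction) parsed from a raw outcome string."""
    text = outcome_text.lower().strip()
    adopted = text.startswith("aangenomen")          # 'aangenomen' = adopted
    # e.g. 'verworpen met de stemmen van de sp voor'
    m = re.search(r"met de stemmen van (?:de )?(.+?) (voor|tegen)", text)
    parties, direction = (m.group(1), m.group(2)) if m else ("", "")
    parties = parties.replace("partij voor de dieren", "pvdd")   # normalise names
    return adopted, parties, direction

print(classify("Verworpen met de stemmen van de SP voor"))
# -> (False, 'sp', 'voor')
```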

The analysis (by necessity) focuses on visible influence. The first name on the list of people introducing a proposal is considered the initiator. In practice, an initiator will probably sometimes let someone else take the credit of being listed first.

The code for cleaning and analysing the data is available here. The D3 code for the network graphs is based on this example.

Sevillanas. The Spanish punk

Update 11 January: Spotify data added.
According to the English Wikipedia page, «Generally speaking, a sevillana is very light hea[r]ted, happy music». There’s certainly some bland stuff around, but many sevillanas are explosive and raw. In fact, sevillanas are the punk of Spanish music.

I wanted to back this claim up by pointing to the length of the songs on the legendary Sevillanas de los Cuarenta album. It’s a known fact that punk is a genre with very short songs: on average 2:58 according to this analysis by blogger Dale Swanson. It’s the shortest of all the genres he analysed. Well, the average song length on the Sevillanas de los Cuarenta album is 2:44.

However, there may be some problems with this argument. First, some of the songs on the album have a haunting quality about them (for example, A flamenca no me ganas by Gracia de Triana), which makes you wonder whether they were played too fast when they were transferred to CD. This may be an issue, but even if you correct for this the songs on Sevillanas de los Cuarenta would still be shorter than punk songs (for details see below, Method).

More problematic is the fact that short songs appear to have been normal in the 1940s. According to this analysis by Rhett Allain, average song lengths rarely exceeded 3 minutes until the end of the 1960s (see also the debate in the comments on possible explanations). So the shortness of the songs on the Sevillanas de los Cuarenta album isn’t that impressive. In fact, a (possibly non-representative) sample of 1970s sevillanas has an average song length of 3:22, which appears to be quite typical for the 1970s judging by Allain’s data.

The Musicbrainz database used by Allain doesn’t seem to contain many sevillanas. However, the Discogs website, which has data on millions of songs, does contain a few hundred sevillanas. Since posting the first version of the article, I realised metadata can also be obtained from Spotify. Spotify has over 2,500 songs with «sevillanas» in the title, but genre searches return only a few hundred songs per genre (probably the genre tags aren’t applied consistently). Below is the song length of a number of genres in the Discogs and Spotify databases.

For jazz and house especially, Spotify reports different durations than Discogs. Other than that, median song durations are very similar, which is actually quite remarkable given the differences between the datasets. In both datasets, sevillanas tend to be somewhat longer than punk songs, but shorter than the other genres in the analysis.

An analysis by year might be interesting, but tricky: first because the release year in the Discogs data may refer to the year in which an album or song was re-released and second because the number of sevillanas tracks with sufficient information isn’t large enough for that level of precision. The Spotify dataset has no information on the release year of tracks (I guess if I really wanted I could have looked up the release date of the album each track is on).

All in all, the average sevillanas may be somewhat longer than a punk song. But you can still argue that a sevillanas song is in fact a series of even shorter songs, as illustrated by the plot of ¡Ay Sevilla! by Los de la Trocha shown above. The typical sevillanas is a series of short bursts of music that can be as abrupt as any punk song.

Method

Scripts for the analyses are available here.

Songs on Sevillanas de los Cuarenta too fast?
Spotify has three versions of A flamenca no me ganas: the one from Sevillanas de los Cuarenta (2:29 on cd) and two others lasting 2:37 and 2:41. This suggests it’s possible that the «correct» version is up to 8% longer than the one on Sevillanas de los Cuarenta. Even if you assume all the songs on the album should last 8% longer, the average length would become 2:56, still less than for punk. On the other hand, it’s doubtful that all songs on Sevillanas de los Cuarenta are too short. For example, Sevillanas del Espartero by Concha Piquer lasts 2:57 on Sevillanas de los Cuarenta, but Spotify has versions lasting only between 2:27 and 2:35.

1970s sevillanas
The sample of 1970s songs is from albums C, D and F of the HISPAVOX Sevillanas de Oro collection (cd versions), containing songs by los Marismeños, Amigos de Gines and others (not all Sevillanas de Oro albums contain the release year of the songs, but these do).

Discogs data
The Discogs data are available through an API and as monthly data dumps. I thought I’d spare myself the trouble of figuring out how the API works, so I opted for the data dump (the one for 1 December 2014). The downside is that the data is 2.8 GB zipped and 19.2 GB unzipped, so downloading and analysing the data takes a while.

The data dump is xml (the API should return json). I’m not really familiar with xml so I used some not very sophisticated, but effective, regex to sort it out. The data is organised in releases (e.g., albums) that have tags (e.g., for the year in which it was released and for genres and styles). The releases contain tracks that have their own tags, including duration. In order to filter out excessive track lengths I ignored any release containing the string mix and tracks with a duration longer than one hour.

Discogs uses hundreds of genre and style tags including some quite specific ones like ranchera and rebetiko, but not sevillanas. I decided to include only tracks with sevillanas in the title. This will exclude some legitimate sevillanas, but I reckon there probably won’t be too many false positives.
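
To give an idea, here is a rough sketch of such a regex approach (the element names reflect my reading of the dump’s structure, the file name is hypothetical, and this is not the actual script).

```python
# Rough sketch: stream the releases dump, skip 'mix' releases, and keep
# durations of tracks with 'sevillanas' in the title (and under one hour).
import re
import statistics

track_re = re.compile(r"<track>(.*?)</track>", re.DOTALL)
title_re = re.compile(r"<title>(.*?)</title>")
duration_re = re.compile(r"<duration>(\d+):(\d+)</duration>")

durations, buf = [], []
with open("discogs_releases.xml", encoding="utf-8") as f:
    for line in f:                        # stream the ~19 GB file line by line
        buf.append(line)
        if "</release>" not in line:
            continue
        release, buf = "".join(buf), []
        if "mix" in release.lower():      # skip releases containing the string 'mix'
            continue
        for block in track_re.findall(release):
            title = title_re.search(block)
            duration = duration_re.search(block)
            if not (title and duration):
                continue
            seconds = int(duration.group(1)) * 60 + int(duration.group(2))
            if "sevillanas" in title.group(1).lower() and seconds <= 3600:
                durations.append(seconds)

print(len(durations), "tracks, median", statistics.median(durations), "seconds")
```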

Spotify data
I accessed the Spotify data through their web API. As indicated in the article, genre searches resulted in only a few hundred results per genre, which suggests these tags are often omitted.
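
A minimal sketch of the kind of query used is shown below (note that the search endpoint nowadays requires an OAuth access token; duration_ms and name are standard fields of Spotify’s track object).

```python
# Sketch: page through the track search and keep durations of tracks
# with 'sevillanas' in the title.
import statistics
import requests

TOKEN = "YOUR_ACCESS_TOKEN"   # e.g. obtained via Spotify's client credentials flow
durations = []
for offset in range(0, 1000, 50):                    # page through the results
    r = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": "sevillanas", "type": "track", "limit": 50, "offset": offset},
        headers={"Authorization": f"Bearer {TOKEN}"})
    items = r.json()["tracks"]["items"]
    if not items:
        break
    durations += [t["duration_ms"] / 1000
                  for t in items if "sevillanas" in t["name"].lower()]

print(len(durations), "tracks, median", statistics.median(durations), "seconds")
```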

Plotting a waveform
Based on this discussion, plotting a waveform from a .wav music file using Python should be simple, but saving the plot turned out to be a problem (googling the error message OverflowError: Allocated too many blocks taught me I’m not the only one having that problem, but I didn’t find a solution that worked for me). Instead I turned to R and found that the tuneR package will let you read and plot .wav files without a problem.
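
For what it’s worth, downsampling the signal before plotting might avoid the matplotlib error; here is an untested sketch of that workaround (file names are hypothetical).

```python
# Untested workaround sketch: plot far fewer points than the raw .wav contains.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, data = wavfile.read("ay_sevilla.wav")
if data.ndim > 1:                       # keep a single channel if the file is stereo
    data = data[:, 0]
step = max(1, len(data) // 20000)       # plot at most ~20,000 points
samples = data[::step]
seconds = np.arange(len(samples)) * step / rate
plt.plot(seconds, samples, linewidth=0.3)
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.savefig("waveform.png", dpi=150)
```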

Identifying «communists» at the New York Times, by 1955 US Army criteria

A while ago, Open Culture wrote about a 1955 US Army manual entitled How to spot a communist. According to the manual, communists have a preference for long sentences and tend to use expressions like:

integrative thinking, vanguard, comrade, hootenanny, chauvinism, book-burning, syncretistic faith, bourgeois-nationalism, jingoism, colonialism, hooliganism, ruling class, progressive, demagogy, dialectical, witch-hunt, reactionary, exploitation, oppressive, materialist.

What happened in the 1950s is pretty terrible, but that doesn’t mean we can’t have a bit of fun with the manual. I used the New York Times Article Search API to look up which of its writers actually use terms like hootenanny, book-burning and jingoism. The results are summarised below.

Interestingly, many of the users of «communist» terms are either foreign correspondents or art, music and film critics. While it’s possible that people who have an affinity with the arts tend to sympathise with communism, an alternative explanation would be that critics have more freedom than «regular» journalists to use somewhat exotic and expressive terms like the ones the US Army associated with communism.

Also of interest is that one of the current writers on the list is Ross Douthat, the main conservative columnist of the New York Times. In his articles, he uses terms like materialist, oppressive, reactionary, exploitation, vanguard, ruling class, progressive and chauvinism. Surely he wouldn’t be a reformed communist - would he?

Method

The New York Times Article Search API is a great tool, but you have to keep in mind that digitising the archive isn’t an entirely error-free process. For example, sometimes bits of information end up in the lastname field that don’t belong there (e.g. "lastname": "DURANTYMOSCOW"). While it’s possible to correct some of these issues, it’s likely that search results will in some way be incomplete.

To get a manageable dataset, I looked up all articles containing any combination of two terms from the manual. I then calculated a score for each author by simply counting the number of unique terms they have used.
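
In outline, the queries and the per-author tally could look something like this hedged sketch (the term list is truncated, only the first page of results per query is read, and filtering on the body field via fq is my assumption, based on the byline filter described below).

```python
# Hedged sketch, not the actual script: query each pair of terms and count
# the unique 'communist' terms per byline.
import itertools
import time
import requests
from collections import defaultdict

TERMS = ["hootenanny", "book-burning", "jingoism", "vanguard", "comrade"]  # etc.
API = "https://api.nytimes.com/svc/search/v2/articlesearch.json"
terms_by_author = defaultdict(set)

for a, b in itertools.combinations(TERMS, 2):
    # assumption: fq accepts body filters the same way it accepts byline filters
    fq = f'body:("{a}") AND body:("{b}")'
    r = requests.get(API, params={"fq": fq, "api-key": "YOUR_KEY"})
    for doc in r.json()["response"]["docs"]:
        byline = (doc.get("byline") or {}).get("original") or ""
        if byline:
            terms_by_author[byline].update([a, b])
    time.sleep(6)  # stay within the API's rate limits

scores = sorted(terms_by_author.items(), key=lambda kv: len(kv[1]), reverse=True)
```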

An alternative would have been to correct for the total number of articles per author in the NYT archive. It took me a while to figure out how to search by author using the NYT API. It turns out you can search for terms appearing in the byline using ?fq=byline:("firstname middlename lastname") - even though this option isn’t mentioned in the documentation. I’m not entirely sure such a search will return articles where the byline/original field is empty.

As you might expect, there’s a correlation between the number of articles per author and the number of unique terms this author has used.

All in all, it would be possible to calculate a relative score, for example number of terms used per 1,000 articles, but this may have unintended consequences. To take an extreme example: an author who has written one article which happened to contain three terms would get a score of 3,000 using this method, whereas an author who has thousands of articles and consistently uses a broad range of terms but not at a rate of three per article would get a (considerably) lower score.

I decided to stick with the absolute number of unique terms per author. This has the disadvantage that authors who have written few articles are unlikely to show up in the analysis, but I’m not sure that this problem can be adequately solved by calculating a relative score.

The Python and R code used to collect and analyse the data is available on Github.
