champagne anarchist | armchair activist

DuckDuckGo shows code examples

Because of Google’s new privacy warning, I finally changed my default search engine to DuckDuckGo.[1] So far, I’m quite happy with it. I was especially pleased when I noticed they sometimes show code snippets or excerpts from documentation on the results page.

Apparently, DDG has decided that it wants to be «the best search engine for programmers». One feature they’re using is instant answers, which are sometimes shown in addition to the ‘normal’ search results. These instant answers may get their contents from DDG’s own databases - examples include cheat sheets created for the purpose - or they may use external APIs, such as the Stack Overflow API. Currently, volunteers are working to improve search results for the top 15 programming languages, including JavaScript, Python and R.

One could argue that instant answers promote the wrong kind of laziness - copying code from the search results page rather than visiting the original post on Stack Overflow. But for quickly looking up trivial stuff, I think this is perfect.

  1. I assume the contents of the privacy warning could have been a reason to switch search engines, but what triggered me was the intrusive warning that Google shows in each new browser session - basically punishing you for having your browser throw away cookies.  ↩

Exploring traffic lights with location data from cyclists’ phones

In 2006, Amsterdammers voted Frederiksplein the location with the most irritating traffic light. Now, ten years later, data from the Fietstelweek (Bicycle Counting Week) offer a unique opportunity to map how much time cyclists lose at traffic lights. During the Fietstelweek, over 40,000 Dutch cyclists shared their location data using a smartphone app. Some of the findings are summarised on the map above, which shows quite a few red dots - locations where cyclists lose on average 30 seconds or more.

Some of those bottlenecks also featured in the 2006 top ten of irritating traffic lights, including the ‘winner’ at the time, the Frederiksplein. And many red dots are on the Plusnet Fiets, a network of essential cycling routes where the municipality would prefer an average delay of at most 20 or 30 seconds.[1]

The data only allow for a general exploration of cycling bottlenecks. In order to understand more precisely what’s going on, one would have to analyse each crossing separately. At a few locations, average delays of over two minutes have been observed - perhaps traffic lights are not the sole explanation for those delays.

The data from the Fietstelweek were collected in September. The situation may well have changed since then at some locations. A good example is the Muntplein, where cycling is pretty smooth now - thanks to Alderman Litjens, who banned most cars and removed the traffic lights. A change that occurred before the Fietstelweek is the removal of traffic lights at the Alexanderplein. And it shows: all dots there are green.

Cyclists’ organisation Fietsersbond wants traffic lights adjusted to create shorter waiting times for cyclists. Research has shown this measure to be very effective and relatively easy and cheap to implement. But it’s not just about technical improvements; future policies should make ‘radical choices’ in favour of bicycle and pedestrian traffic, in order to prevent the city from coming to a standstill due to congestion.

This seemed like a good occasion to organise a follow-up poll on traffic lights. Click here to vote for Amsterdam’s most irritating traffic light - 2016 edition.


The Fietstelweek is an initiative of cyclists’ organisation Fietsersbond and a number of consultancies and research organisations. Between 19 and 25 September 2016, over 40,000 cyclists used an app to share their location data. The Fietstelweek data has been made available (thanks!) on condition that derived products are also made available as open data. The processed data of my analysis is here and the code for processing the data here and here.

The Fietstelweek data is available in the form of routes, links (intensity and speed) and nodes (delays). The nodes data contains a variable tijd (time). This is the delay along the trajectory from 50m before to 50m after the node, relative to the time the cyclist would normally take to cycle 100m (thanks to Dirk Bussche of NHTV Breda University of Applied Sciences for details on how the data were processed).
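As I understand this definition, the delay could be computed roughly as follows. This is my own sketch, not the actual Fietstelweek processing code, and the function and parameter names are mine:

```python
def node_delay(observed_seconds, normal_speed_ms):
    """Delay at a node: time observed over the 100m trajectory
    (50m before to 50m after the node) minus the time the cyclist
    would normally need to cover 100m at their usual speed (in m/s)."""
    expected_seconds = 100 / normal_speed_ms
    return observed_seconds - expected_seconds

# A cyclist who normally rides 5 m/s (18 km/h) needs 20 s for 100 m;
# if the 100 m around a traffic light took 50 s, the delay is 30 s.
delay = node_delay(50, 5)
```

The key point is that the delay is measured relative to each cyclist’s own normal speed, so slow and fast cyclists are treated on an equal footing.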

The dataset contains over 750,000 nodes. I filtered them in three steps, keeping only nodes within a square around Amsterdam; near traffic lights; and with at least 50 observations. This resulted in 1,845 nodes with almost 400,000 observations. For details, see the scripts.
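The three filtering steps can be sketched like this. This is a simplified illustration with made-up field names and bounding-box values, not the actual scripts (which, among other things, use proper geographic distances):

```python
# Hypothetical bounding box around Amsterdam (degrees lat/lon).
BBOX = {"lat_min": 52.28, "lat_max": 52.43, "lon_min": 4.73, "lon_max": 5.03}

def near_traffic_light(node, lights, max_dist=0.0005):
    """Crude proximity check in degrees, for illustration only."""
    return any(abs(node["lat"] - la) < max_dist and abs(node["lon"] - lo) < max_dist
               for la, lo in lights)

def filter_nodes(nodes, lights, min_obs=50):
    """Apply the three filters: inside the box, near a light, enough observations."""
    return [n for n in nodes
            if BBOX["lat_min"] <= n["lat"] <= BBOX["lat_max"]
            and BBOX["lon_min"] <= n["lon"] <= BBOX["lon_max"]
            and near_traffic_light(n, lights)
            and n["observations"] >= min_obs]
```

The minimum of 50 observations per node is there to keep the average delays reasonably reliable.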

Data on traffic lights is from the municipality.

  1. In a new policy to be decided early 2017, the municipality indicates that the average waiting time for cyclists, measured at the busiest hour, should not exceed 45 seconds. At the Plusnet Fiets, it is further deemed desirable that the maximum delay doesn’t exceed 20 seconds at busy crossings and 30 seconds elsewhere. Delay times include the effect of slowing down and accelerating.  ↩


Time on the y-axis

Normally, charts have time on the x-axis, moving from left to right. Earlier this year, Alberto Cairo wrote an article on charts that have time on the y-axis. This may make practical sense if you want to show developments over time on a political left-right scale. He also pointed to the use of mobile screens:

As a final note, here’s a prediction: as a majority of readers are accessing their news through smartphones […] which are usually held upright and navigated by scrolling vertically, vertical time-series charts with time on the Y-axis will become more common in the next few years. Will we witness a new visual convention being born?

Now Kaiser Fung discusses a few charts by the Washington Post (aptly described as troll hair charts) and the New York Times that also have time on the y-axis. They’ve made different choices regarding the direction of time: «The Post’s choice of top to bottom seems more natural to me than the Times’s reverse order but I am guessing some of you may have different inclinations.» Which suggests that the convention of showing time on the y-axis hasn’t crystallised yet.

Based on the connection with scrolling on mobile screens, the Washington Post’s top-to-bottom approach may well emerge as the standard approach.


Inequality in elections

There’s been a bit of fuss about turnout in the American presidential election, but turnout inequality is an issue in the Netherlands too. Youth, low-educated people and people with lower incomes are less likely to vote, possibly because they have little faith that politicians will take their interests to heart.

Income, turnout and voting behaviour vary across neighbourhoods as shown by the confetti plot below, which uses the Amsterdam results of the 2012 Lower House election as an illustration.

The picture is clear: in rich neighbourhoods, more people vote, and they’re more likely to vote VVD or D66 - parties that favour free-market economics. In poorer neighbourhoods, the social-democrat PvdA and the socialist SP are more popular, but fewer people turn out to vote.

Given the large differences in turnout, it’s surprising that hardly any serious turnout campaigns have been run in the Netherlands. There’s ample scientific research on the effectiveness of such campaigns.

Click the URLs below the chart to show turnout, left votes or liberal votes. Here is a larger version of the chart - even though this may not make much difference on a mobile screen.


Comparing neighbourhood-level election results with income data on the residents of these neighbourhoods is somewhat problematic because voters aren’t required to vote in their own neighbourhood. I have excluded a few neighbourhoods, including Station-Zuid WTC en omgeving, because they have polling stations at railway stations where relatively many people from other neighbourhoods vote.

The correlations are pretty robust. You’ll also find them by analysing voting behaviour in Amsterdam neighbourhoods in the 2014 city council election, or differences between municipalities across the Netherlands in the 2012 Lower House election (in the latter case, correlations are somewhat weaker). Data and scripts here.
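The kind of correlation reported here can be computed along these lines. This is a minimal sketch with invented per-neighbourhood figures purely for illustration, not the actual data or scripts:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example values: mean income (x EUR 1,000) and turnout (%)
# for five hypothetical neighbourhoods.
income = [22, 25, 31, 38, 45]
turnout = [61, 64, 70, 75, 80]
r = pearson(income, turnout)
```

A coefficient close to 1 would reflect the pattern described above: richer neighbourhoods, higher turnout.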


bullshit #dataviz

Donald Trump won because Hillary Clinton failed to get the vote out. At least, that’s the story this heavily retweeted chart seems to tell (click the chart for a larger version). But according to data visualisation expert Alberto Cairo it’s an example of the kind of bullshit #dataviz we need to fight against. In fact, many people have criticised the chart, for a number of reasons, including:

  • The y-axis, obviously.[1] The chart suggests Clinton got about half as many votes in 2016 as Obama in 2012, which of course isn’t true. Some have argued that truncating the y-axis is justifiable in this case because otherwise small differences wouldn’t show. However, with a y-axis starting at zero, you can still see what’s going on.
  • Why is it showing only the latest three presidential elections? Add data for elections before 2008, and the picture becomes quite different.
  • Not all votes have been counted yet. At some point, Nate Cohn of the NYT predicted that Trump would get 61.2 million votes and Clinton 63.4 million once all votes are counted. That would also change the picture considerably.
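The distortion in the first point above can be put in numbers: with a truncated baseline, the ratio of two bar heights as drawn is (a − base) / (b − base) rather than a / b. A minimal sketch, using made-up vote counts rather than the actual figures:

```python
def visual_ratio(a, b, baseline=0):
    """Ratio of two bar heights as drawn when the y-axis starts at `baseline`."""
    return (a - baseline) / (b - baseline)

# Hypothetical counts: 60 million vs 66 million votes (in millions).
# The true ratio is about 0.91, but with the axis starting at 58 million
# the shorter bar is drawn at only a quarter of the taller one.
true_ratio = visual_ratio(60, 66)
drawn_ratio = visual_ratio(60, 66, baseline=58)
```

This is why a y-axis starting at zero, even if it makes the differences look small, is the honest default for bar-like comparisons.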

So who created this bullshit #dataviz and why? The earliest version I could find is by Economics Professor D Yanagizawa-Drott.[2] My guess is that he created the chart as a quick-and-dirty attempt to understand what happened on 8 November, never expecting it to go viral, and that he never gave much thought to its execution.[3] While the chart design is problematic, the idea behind it - exploring how turnout affected the outcome of the election - makes sense.

Meanwhile, the post-election dataviz deluge highlighted another problem. People post charts without indicating the source of the data they used. To make matters worse, other people will simply copy and post that chart without saying who they got it from. There should be a rule that if you post a chart, you should indicate the data source and who created the chart - or at least where you found it.

  1. Jonathan Webber, who was among the people to make the chart popular, has a bio that says Trolling y-axis mavens since 2016 (I assume he added this line in response to criticism of the chart).  ↩

  2. I wonder whether it’s possible to systematically search for images on Twitter?  ↩

  3. He introduced the chart as «A quick look at turnout data». When someone said the y-axis should start at zero, he responded: «True. Also contact Microsoft Excel, let them know the default y-axis is simply unacceptable; lazy people like me need nudging.»  ↩