Assignment 1-2

A short recap of the previous assignment: I’m using the Outlook on Life Surveys dataset and I’m interested in the relation between union membership and political participation (details here).

We’re required to write a programme that outputs frequency tables for a number of variables and discuss the output. The output’s supposed to be ‘interpretable (i.e. organized and labeled)’. I’m not entirely sure what is meant by that, but I’ve decided to recode the variables (e.g. 1 = ‘Yes’) and print the variable names and questions above the output. (If you’re logged in as a student, see the forum.)
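For what it’s worth, the gist of the recoding and labelling can be sketched in pandas like this (the data and numeric codes below are made up; the actual OOL codes may differ):

```python
import pandas as pd

# Hypothetical raw responses; the real OOL file uses its own codes.
ool = pd.DataFrame({'W1_P8': [1, 2, 1, 2, 2, -1]})

# Recode numeric values to readable labels.
labels = {1: 'Yes', 2: 'No', -1: 'Refused'}
recoded = ool['W1_P8'].map(labels)

# Print the question above the percentage table.
print('W1_P8: Does anyone in your household currently belong to a union?')
pct = recoded.value_counts(normalize=True).sort_index() * 100
print(pct)
```
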

The programme itself is posted here. Below I’ll discuss some of the output. For the sake of convenience, I’ll only show percentages (the raw counts can be obtained by running the programme). First, the current employment status of respondents.

PPWORK: Current Employment Status
Not working - retired                           21.011334
Not working - on temporary layoff from a job     1.264167
Not working - looking for work                  10.854403
Not working - disabled                           8.456844
Not working - other                              6.451613
Working - self-employed                          6.190061
Working - as a paid employee                    45.771578
dtype: float64

One of the variables I’m interested in, is union membership. My understanding of the American situation is that union membership is often dependent on whether your workplace is organised (by contrast, in the Netherlands it’s not uncommon for unemployed or retired people to be union members). For that reason, it makes sense to look specifically at respondents who are working as paid employees. (The fact that union membership is measured at the household level complicates matters but that doesn’t change my preference to focus on paid employees.)

1,050 respondents (46%) are paid employees. This would seem to be a sufficiently large group for the purposes of the analyses I plan to do. In the programme, I created a subset of respondents who indicated they are working as paid employees. The output below is based on this subset.
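The subsetting step amounts to something like this (the employment labels match the output above, but the data itself is made up):

```python
import pandas as pd

# Hypothetical frame; the real OOL data has 2,294 rows and many columns.
ool = pd.DataFrame({'PPWORK': ['Working - as a paid employee',
                               'Not working - retired',
                               'Working - as a paid employee']})

# Keep only respondents who are working as paid employees.
paid = ool[ool['PPWORK'] == 'Working - as a paid employee'].copy()
print(len(paid))
```
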

Next, the numbers for the variable on union membership (as indicated, at the household level).

W1_P8: Does anyone in your household currently belong to a union?
No         78.380952
Refused     1.142857
Yes        20.476190
dtype: float64

Within the subset of respondents with paid employment, a little over 20% indicate that at least one person in their household is a union member. This compares to a union density of 11.1% among wage and salary workers in the US according to the Bureau of Labor Statistics.

Some of that difference can be explained by the fact that the 20% figure will include some respondents who aren’t union members themselves but who have someone in their household who is. On the other hand, the BLS is a bit more persistent in assessing union membership, and would likely classify some people as union members who wouldn’t be classified as such in the OOL surveys.[1] All in all, I’m inclined to say the 20% figure in the OOL surveys is higher than expected and that there is a possibility that the survey sample is in some way biased towards union members.

And finally the political participation measures.

W1_L4_A: [Contacted a public official or agency] Please indicate if you have done any of the following activities in the last 2 years.
No         74.095238
Refused     2.190476
Yes        23.714286
dtype: float64
W1_L4_B: [Attended a protest meeting or demonstration] Please indicate if you have done any of the following activities in the last 2 years.
No         90.571429
Refused     2.190476
Yes         7.238095
dtype: float64
W1_L4_C: [Taken part in a neighborhood march] Please indicate if you have done any of the following activities in the last 2 years.
No         93.047619
Refused     2.095238
Yes         4.857143
dtype: float64
W1_L4_D: [Signed a petition in support of something or against something] Please indicate if you have done any of the following activities in the last 2 years.
No         58.190476
Refused     2.380952
Yes        39.428571
dtype: float64

Respondents are more likely to have signed a petition or contacted an official than to have hit the streets. This is as expected.

Finally, a word on missing values. For all variables considered here, the percentage ‘refused’ is below 2.5%. That seems low enough not to cause any problems.

PS One of the students who reviewed my first assignment suggested I include ‘canvassing’ as a measure of political participation, which seems to make sense. Unfortunately the dataset doesn’t seem to include this aspect, but there are variables on other types of political participation that I may add in the future.

  1. «Employed wage and salary workers are classified as union members if they answer “yes” to the following question: On this job, are you a member of a labor union or of an employee association similar to a union? If the response is “no” to that question, then the interviewer asks a second question: On this job, are you covered by a union or employee association contract? If the response is “yes,” then these persons, along with those who responded “yes” to being union members, are classified as represented by a union. If the response is “no” to both the first and second questions, then they are classified as nonunion.»  ↩

Assignment 1-1

In our first assignment, we’re required to pick a data source that we’ll use in our upcoming assignments and think about the kind of analyses we’d like to do with it.

I’m a researcher at a Dutch trade union and I’m also interested in the broader social role of unions. I know there’s considerable evidence that union members are more likely to turn out to vote in elections than non-members, at least in the US, but I’m interested in whether union membership is also related to other forms of political participation.

The Outlook on Life Surveys contain variables on activities like contacting a public official or attending a protest meeting. They also contain variables on membership in organisations and social movements such as Occupy Wall Street and the Tea Party. They do not contain information on whether the respondents themselves are union members, but they do contain a question about whether anyone in the respondent’s household is currently a union member. While it may turn out to be problematic for my purposes that there’s no information on union membership of the respondents themselves, I still think this looks like an interesting dataset and I decided to pick it for my assignments.

Next, we’re required to find relevant literature using sources like Google Scholar. I used the search terms union membership participation and one of the publications I found is a study by Kerrissey and Schofer (2013) on union membership and political participation in the US. Kerrissey and Schofer find that union membership is associated with various measures of political participation including voting, participating in protests, joining voluntary associations, and donating money to political campaigns (controlling for a number of variables). The effect is larger for lower-educated persons - likely because they have fewer alternative sources of political capital.

All kinds of interesting questions arise: for example, does membership in other types of organisations and social movements have a similar association with political participation as union membership? And what is the role of the institutional context (cf. Cebolla-Boado and Ortiz 2014)? But for now I’ll focus on a more straightforward question.

I plan to - sort of - replicate Kerrissey and Schofer’s study using a different dataset (the Outlook on Life Surveys) and a slightly different independent variable (union membership at the household level instead of the individual level). My hypothesis is that political participation (as measured by a number of variables) will be higher for respondents with a union member in their household.


Cebolla-Boado, Hector and Luis Ortiz (2014). Extra-representational types of political participation and models of trade unionism: a cross-country comparison. Socio-Economic Review (12:4), 747–778.
Kerrissey, Jasmine and Evan Schofer (2013). Union Membership and Political Participation in the United States. Social Forces (91:3), 895–928.

Coursera Data Analysis and Interpretation

I was initially introduced to R by Nathan Yau’s Visualize This, but subsequently I learned a lot about R through some of the courses in Brian Caffo, Roger Peng and Jeff Leek’s Data Science Specialization at Coursera. In fact, the course was a reason for me to postpone switching from R to Python.

By now, I’ve decided to make the switch anyhow, and I think I’ve found another Coursera specialisation that will help me learn the tricks: Lisa Dierker and Jen Rose’s Data Analysis and Interpretation. It’s kind of basic, at least at the beginning, but that’s good. Some of the assignments require you to blog about a project of your choosing, so I’ll be posting about my homework here.


Can mistyped urls deliver representative samples?

An article on the Washington Post’s Monkey Cage blog describes how researchers managed to carry out opinion polls on executions in Bahrain, «one of the most difficult countries in the region for such sensitive research». In order to overcome the difficulties encountered, they ran two ‘innovative surveys’ in partnership with research company RIWI.

RIWI takes advantage of the fact that people sometimes make mistakes when they type a url in the address bar of their browser. If the url they mistakenly go to happens to be controlled by RIWI, they are redirected to a short questionnaire. RIWI claims this is a cheap way to obtain a non-biased sample.

This sounds like a smart approach that might actually work. But does it? Some people have doubts, such as one of the commenters on the Monkey Cage post:

Innovative is certainly one way to describe it. How can you possibly consider Internet typo redirects as a nationally representative sample? Would be very curious to see what the raw demographics look like compared to the population. Hope there was some sophisticated weighting used.

In a recent article in Nature, RIWI founder and CEO Neil Seeman explains his method. In a comment, one Charles Packer observes:

There are no citations here of publications that assess the validity of the company’s claims. Same for the corporate website: no discussion of the mechanics of its methodology.

When I searched for the company on Google, I found a lot of articles aimed at investors and very few discussing their research methods. The most detailed description of their methodology I found is in Seeman’s patent application. It explains that, for example, «Google could harvest the many thousands of users who inadvertently type in instead of and direct them to an online polling page, instead of simply redirecting them to the web site».

The main type of typos RIWI uses seems to be those where people type .cm, .co or .om instead of .com. RIWI uses the respondents’ IP addresses to guess their location. In his patent application, Seeman claims that his approach is successful in reducing bias:

Under the invention, every individual Internet user around the globe has the equal probability of being drawn into the potential respondent pool. This dramatically reduces selection bias and coverage bias as compared to all other current techniques of respondent identification and selection online. There is no reason to believe that the people who fail to randomly fall into the potential survey population (i.e., who do not make the typographical error) have distinct characteristics from the people who do, thus increasing the validity of the results. This makes the process of respondent selection scientifically valid, superior even to random digit telephone dialing.

Is that true? While their claims sound plausible, it’s still conceivable that bias occurs. For example, through the selection of urls RIWI uses; because people who tend to make typos may be different from people who don’t; or because people who directly type urls into the address bar of their browser may be different from people who prefer to google for sites.

It has been claimed that RIWI has predicted election results in Egypt and Turkey more accurately than other firms. That sounds promising, but it would be helpful to know how many election outcomes RIWI has predicted and how accurate all of these predictions were. RIWI also refers to a validation study of one of their US samples, but the original study seems to have been removed from their website. The website’s FAQ says ‘third party and academic review’ is available, but only on request: «Yes, but please contact us first so we can get a sense of your needs and most applicable information to send you».

It’s quite possible that RIWI’s approach is superior to the survey panels used by other firms, but more openness about their methodology and results would make their case more convincing.


Base versus ggplot2

Yesterday, stats guru Jeff Leek confessed the ultimate unpopular opinion in data science: «I don’t use ggplot2 and I get nervous when other people do» (if you haven’t a clue what this is about, you may want to skip this post altogether). His confession met with ridicule, more ridicule, and an occasional «oh my god I thought I was the only one!».

I sort of assumed everybody uses ggplot now. I was wrong: I like base for graphics, is that weird? * Good reference for base graphics! (if, like me, you hate ggplot). * base graphics FTW re:slopegraph (it’s a royal pain to do this in ggplot). * I’m not a fan of ggplot. * Retro! * I kinda hate ggplot. * Vigorous group discussion on the merits of base plot vs #ggplot in #rstats.

For me, it’s six of one and half a dozen of the other: I’m planning to switch to Python.


Rabid feminists, fans and rightwingers

The Oxford Dictionary (the default dictionary on Mac OS X) has been accused of sexism in the examples it provides to illustrate how words are used. The debate focused on its definition of rabid: 1. having or proceeding from an extreme or fanatical support of or belief in something: a rabid feminist. 2. (of an animal) affected with rabies: her mother was bitten by a rabid dog. Why this example? Why portray feminists as rabid?

Apparently, the Oxford Dictionary first ridiculed the critique, but later issued a statement:

We apologise for the offence that these comments caused. The example sentences we use are taken from a huge variety of different sources and do not represent the views or opinions of Oxford University Press. That said, we are now reviewing the example sentence for «rabid» to ensure that it reflects current usage.

«In other words, it’s not the dictionary that’s sexist, it’s the English-speaking world», David Shariatmadari commented in the Guardian. He adds a warning that the review the dictionary plans to do may well find that rabid in fact does occur more often in combination with feminist than with other words (especially if online discussions are included). Even so, the dictionary cannot simply hide behind a word count - they’re still responsible for the editorial choices they make.

And how about the Guardian itself? The table below lists the words that appear most frequently after rabid in Guardian articles since 1999. The words have been stemmed so as to lump together terms like racist and racists.

term count
dog 136
anti 86
fan 63
right 33
support 21
nationalist 18
anim 15
press 14
rightwing 14
tori 13
republican 13
fanbas 13
rightw 12
puppi 11
follow 10
critic 9
antisemit 9
nation 9
racist 9
bat 9
feminist 9
crowd 9
home 9

The term anti deserves a separate analysis. The table below lists the most frequent words matching the pattern rabid anti[\s\-]([a-z]+), again reduced to their stem.

term count
semit 11
european 10
communist 9

Terms like dog, anim[al] and bat obviously have to do with the second meaning of the term rabid (affected with rabies). Other than that, it’s clear that rabid is far more often used in combination with fan or rightwing than feminist. At least in the Guardian.


I simply adapted the code I wrote earlier to analyse use of the term illegal in the Guardian and the New York Times.
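The core of the counting step can be sketched like this, with a toy corpus and a deliberately crude stemmer (the actual analysis ran over Guardian article texts and presumably used a proper stemmer such as Porter’s):

```python
import re
from collections import Counter

# Toy corpus standing in for the Guardian article texts.
corpus = 'a rabid dog bit a rabid fan; rabid fans cheered rabid dogs'

def stem(word):
    # Crude stemmer for illustration only: strip a trailing 's'.
    return word[:-1] if word.endswith('s') else word

# Find each word directly following 'rabid', stem it, and count.
matches = re.findall(r'rabid ([a-z]+)', corpus)
counts = Counter(stem(w) for w in matches)
print(counts.most_common())
```
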


Minister Jeroen Dijsselbloem takes up data visualisation challenge

Every year, Dutch Finance Minister Jeroen Dijsselbloem sends a report to Parliament on state participations - companies that are (partially) owned by the state. Recently, the minister answered questions from the Finance Committee of the Lower House. One of the questions challenged the use of a stacked bar chart to show dividends, «since this isn’t very clear». The minister acknowledges the problem and takes up the challenge:

In creating this bar chart we aimed at comprehensiveness by including all dividends received from all state participations. Because of the large differences in dividend, this results in sub-optimal readability. For the 2015 annual report, it will be considered whether the readability can be improved without making concessions to comprehensiveness.

I’m sure he’ll be interested in good ideas, so if you have any suggestions for improving the chart, tweet them to @j_dijsselbloem. And if you want to give it a try yourself: here’s the data for 2010–2014.

Update April 2016 - Jean Adams shows how the chart can be improved.[1]

Update October 2016 - Alas. Last month, a new issue of the participations report was sent to Parliament. Apparently, the minister hasn’t succeeded in improving the dividend chart; the chart has now been omitted. Instead, the report now has a line chart showing just the total amount of dividend received (further, the simple bar chart showing dividend received per participation for the most recent year has been replaced with a pie chart).

  1. Adams correctly points to a discrepancy between the csv and the original chart: the csv contains data on total dividend paid, whereas the original chart shows the amount received by the state (the two are different for companies owned for less than 100% by the state).  ↩


Solid reputation of Statistics Netherlands (CBS) ‘at risk’

Statistics Netherlands (CBS), the Dutch national statistics office, has always had a solid, if somewhat dull, reputation. The organisation published data, but didn’t do projections and was reluctant to offer interpretations. Meanwhile, it was considered to be among the best statistics offices in the world. But over the past two years, there have been some changes.

In 2014, the newly appointed director of the CBS said in an interview (in Dutch) that he wanted his organisation to participate in public debates. Not to express opinions, he assured, but to correct «inaccurate representations». Asked for an example, he referred to the Piketty debate. He felt that data about inequality had been used to provoke a response of «emotional aversion».

In early 2015, the CBS developed a strategic agenda. Some elements of this agenda were about its core business. For example, the CBS wants to automate in order to become less dependent on spreadsheets and manual data processing - which seems to make sense. But the emphasis was on becoming a «news organisation» with a «prime time focus».

Today, Rutger Bregman of De Correspondent has published an analysis (in Dutch) of the new course of the CBS. The organisation plans to stop collecting data on a wide range of topics, including private debts to car dealers and credit card firms, and patients’ satisfaction with health care. Meanwhile, it has invested in a «newsroom».

Bregman discusses a number of instances where the CBS took a position in charged political debates on topics like inequality and the effects of child care cuts. He argues that its role in those debates was dubious. For example, the CBS said that participants in support programmes for job seekers are more likely to find a job than non-participants, without pointing out that this says nothing about the effectiveness of these programmes. Of course, the broader issue is that the CBS gets caught up in controversies, which may undermine public confidence in its data.

Public funding of the CBS has been cut. Income from external clients has risen from 5% to 15% and is expected to reach 25% by 2019, according to a chart in Bregman’s article. The government has sent a proposal to Parliament to dismantle the independent body that determines the research programme of the CBS (an amendment to preserve the independence of the CBS will be put to a vote on Tuesday). Bregman concludes:

[…] data is easily misused. A statistics office that wants to offer more interpretation, wants to make the headlines more often, wants to earn more money and has less oversight, runs more of a risk to do so, no matter how you look at it. The CBS has become world-class precisely by resisting this temptation.

Bregman indicates he sent his article to the CBS last week, but apparently they declined to comment. Today, their chief economist has responded on Twitter to one of the controversies discussed by Bregman. According to one of their researchers, Bregman’s article has created quite a stir within the CBS already.


Power and buzz: Analysing trade union HQ locations by closeness to power and by convenience store score

When Hans Spekman ran for chairman of the Dutch Social-Democrat party in 2011, he said he wanted to move the party’s headquarters from the posh office at the Herengracht in Amsterdam to a «normal district, a neighbourhood where things happen, like Bos en Lommer». Bos en Lommer is a multicultural neighbourhood in the west of the city, in transition from deprived to gentrified.

I agree with Spekman (at least on this matter) and I think his ideas about locations should also apply to trade union headquarters. Out of curiosity I decided to analyse the headquarters locations of European trade unions, using two criteria. First: closeness to power, operationalised pragmatically as the walking distance from the union office to the national parliament. And second: the liveliness of the neighbourhood. For measuring this I propose the convenience store score, which assumes that the number of convenience stores within half a kilometer gives a rough indication of how lively a neighbourhood is. Convenience stores include, for example, 7-Eleven and AH to go stores; some ethnic shops are also classified as convenience stores.

The chart below shows the scores for each union. You can also see the locations of union offices, parliaments and convenience stores on an interactive map, but note that the map may take a while to load - it’s not very suitable for viewing on a smartphone.

The median union headquarters is within 2km walking distance from parliament. For about three-quarters of unions, the distance is below 5km. The general pattern thus seems to be that unions have their national offices close to the institutions of political power. There are exceptions though. Officials of the major Dutch federations FNV and CNV would have to walk 15 to 68km to reach parliament. And sometimes the distance is even longer: a Basque union has its HQ in Bilbao, a Turkish union in Istanbul, and the Polish union Solidarność has its HQ near the port of Gdansk, where it originated. But all in all, the large Dutch unions are quite exceptional in that they don’t have their headquarters near the centre of political power.

As for liveliness: the median number of convenience stores within half a kilometer from union headquarters is 2, but about one in three unions have no convenience stores nearby at all. Some of the most lively union office locations are in countries like Romania, Hungary and Bulgaria. Other examples are CFDT (France), TUC (UK), SAK (Finland) and UGT (Spain). Dutch unions are at the other end of the spectrum and have rather dull headquarters locations - judging by the convenience store score.

So where should a union be? I’d say that influencing the government is one of the tasks unions should perform, and an important one at that. However, this doesn’t depend on having a headquarters close to parliament, but rather on the ability to mobilise workers. I’d argue that the convenience store score is a far better criterion to judge headquarters locations by.

In case you were wondering: Spekman was successful in his bid for the chairmanship of the Social-Democrat party. The party’s headquarters is still at the Herengracht, though: it turned out the lease doesn’t expire until 2018.

Full disclosure: I work at the FNV, at the former FNV Bondgenoten location.


This analysis turned out to be quite a bit more challenging than I initially thought, but it was very instructive. I’m especially happy that I now have a basic understanding of the Overpass API that you can use to retrieve Open Street Map data. OSM has always been a bit of a black box to me but the Overpass API turns out to be a valuable tool.

Measuring neighbourhood characteristics

Initially I wanted to use Eurostat regional stats to analyse neighbourhood characteristics, but Eurostat doesn’t have data beyond the NUTS 3 level (I should’ve known). Level 3 areas may comprise entire cities and are useless for analysing neighbourhoods, so I had to look for alternatives.

Subsequently, I tried getting the name of the smallest area a location is in using the Mapit tool (based on Open Street Map). I thought I might then be able to construct a Wikipedia url by appending the name to the Wikipedia base url. This turned out to work pretty well, not least because Wikipedia is quite good at handling different variants of geographical names. However, while Wikipedia articles tend to be informative, they do not contain a lot of uniform statistical information. Often population, area and population density will be included, but not much beyond that. In addition, the fact that the size of the areas varies poses problems. For example, the population density of a small area cannot be meaningfully compared to the density of a large area. In the end I did add the Wikipedia links to the popups on the map, but I continued looking for other ways to analyse neighbourhood characteristics.

One of the measures I ended up using is closeness to power, operationalised as the walking distance to the national parliament (in countries with a bicameral parliament, I used the location of the lower house). This was a pragmatic choice. An alternative would have been to use the location of ministries, but then I’d have to come up with a way to pick the relevant ministry.

For measuring the liveliness of a neighbourhood, I used the number of convenience stores within half a kilometer, using data from Open Street Map. Obviously there are some limitations to this method. For example, some countries will be mapped in more detail than others. Also, there will be inconsistencies in how shops are classified (cf. this discussion in Dutch about how to classify stores of chains like Blokker).

Obviously, the convenience store score has not been properly validated. I’m not even sure whether objective measures of a neighbourhood’s liveliness exist. I checked this list of «coolest» neighbourhoods in Europe and all but one (Amsterdam Noord) have convenience stores nearby, but then again coolness isn’t the same as liveliness (I guess a neighbourhood can be uncool yet lively). Furthermore, being on a list of cool neighbourhoods isn’t necessarily an indicator of coolness.

Ideally I think a proper assessment of the convenience store score should include a comparison with measurements of criteria derived from Jane Jacobs’s The Death and Life of Great American Cities: mixed primary uses, short blocks, buildings of various ages and density. I guess it should be possible to measure some of these with OSM data (especially the first two). However, that would require a deeper understanding of OSM classifications than I currently have.

Getting the data

While some of the data was obtained by good old-fashioned googling, some of it could be automated.

The starting point for the analysis was the list of affiliates of the European Trade Union Confederation (ETUC). Note that this includes unions in non-EU countries such as Turkey. Also note that I use the word union but most are in fact union federations (the FNV is a bit more complicated; a recent merger has partly done away with the federation structure).

The ETUC doesn’t seem to have a list of addresses on their website. They do provide urls for most of their affiliates. Still, looking up addresses was a bit of an adventure, especially for countries which use non-Latin alphabets (let me know if you find any errors).

For walking distances I used the Bing API. In a number of cases Bing couldn’t find a walking route or the distance seemed wrong. In those cases I manually looked up the distance in Google Maps. Here’s a sample url for getting information from the Bing API (replace KEY with API key).
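A sketch of how such a request can be built (the endpoint is Bing’s REST Routes API; the coordinates below are made up and KEY is a placeholder for an actual API key):

```python
from urllib.parse import urlencode

# Hypothetical helper for building a Bing Routes walking request.
def bing_walking_url(origin, destination, key):
    params = {
        'wp.0': f'{origin[0]},{origin[1]}',            # start (lat, lon)
        'wp.1': f'{destination[0]},{destination[1]}',  # end (lat, lon)
        'key': key,
    }
    return ('https://dev.virtualearth.net/REST/v1/Routes/Walking?'
            + urlencode(params))

# Made-up coordinates for a union HQ and a parliament building.
url = bing_walking_url((52.080, 4.313), (52.366, 4.895), 'KEY')
print(url)
# The walking distance (in km) is in the travelDistance field of the
# JSON response.
```
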

I used the Overpass API (demo) of Open Street Map to get all nodes within 500m from the union HQs, which I used for counting the number of convenience stores. I also used the API for getting the coordinates of all convenience stores in all countries where the ETUC has affiliates. Here’s a sample url for getting all nodes within 500m of a location, and here for getting all convenience stores in a country.
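The query for convenience stores near a location can be sketched like this in standard Overpass QL, run against the public Overpass endpoint (the coordinates below are made up):

```python
# Public Overpass API endpoint.
OVERPASS_URL = 'https://overpass-api.de/api/interpreter'

def build_query(lat, lon, radius=500):
    # Overpass QL: all nodes tagged shop=convenience within `radius`
    # metres of (lat, lon), returned as JSON.
    return (f'[out:json];'
            f'node(around:{radius},{lat},{lon})["shop"="convenience"];'
            f'out;')

query = build_query(52.08, 4.31)
print(query)
# To actually run it (requires network access and the requests package):
# import requests
# elements = requests.post(OVERPASS_URL, data={'data': query}).json()['elements']
# print(len(elements))  # number of nearby convenience stores
```
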

A few unions are missing in the final results because of missing data. For example, I couldn’t figure out what the main office of the Belgian ACV is and I couldn’t find the exact location of the parliament of Malta (somewhere along Republic Street, Valletta).

Calculating scores

I calculated scores as either walking distance to parliament in kilometers or the number of nearby convenience stores. In both cases I took the log10 of the value + 1. To arrive at a 0 to 10 scale, I multiplied by 10 and divided by the maximum score for each variable. For the distance to power measure I converted the score to 10 minus the score, so that a higher score means closer to power.
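In code, the scoring described above amounts to something like this (the walking distances are made up):

```python
import math

def scores(values, invert=False):
    # log10(value + 1), then rescale so the maximum maps to 10.
    logged = [math.log10(v + 1) for v in values]
    top = max(logged)
    scaled = [10 * x / top for x in logged]
    # For distance to power: 10 minus the score, so higher = closer.
    return [10 - s for s in scaled] if invert else scaled

distances_km = [0.5, 2, 15, 68]            # hypothetical walking distances
power = scores(distances_km, invert=True)  # higher score = closer to power
print([round(s, 1) for s in power])
```
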


I used Leaflet and D3.js to map the locations of HQs, parliaments and convenience stores. There are over 60,000 convenience stores in the dataset. This turned out to be a bit too much and the browser all but crashed. I found this script that deals with exactly this problem. While I managed to figure out what I needed to change to make the script work with my data, I’m afraid I don’t fully understand how it works. It’s still too slow for mobile, though.

The political effects of financial crises

In a fascinating study, Manuel Funke, Moritz Schularick and Christoph Trebesch analysed the social and political aftermath of 103 financial crises. During the five years following a financial crisis, the following pattern can be expected:

  • The vote share of far right parties increases by 30%. For far left parties, such an effect was not found. «After a crisis, voters seem to be particularly attracted to the political rhetoric of the extreme right, which often attributes blame to minorities or foreigners».
  • The fragmentation of politics increases and the vote share of coalition parties diminishes.
  • There is more frequent government instability and a higher probability of executive turnover.
  • The average number of anti-government protests almost triples; the number of violent riots doubles (but this effect is lacking in the post-WW2 period) and general strikes increase by at least one-third.

Sounds familiar. Interestingly, the researchers have also looked into long-term effects:

The graphs demonstrate that the political effects are temporary and diminish over time. 10 years after the crisis, almost all variables are back to their pre-crisis levels. The top panel shows that the increase in far-right votes is no longer significantly different from zero after year 8.

The authors ascribe the rise of the Dutch Party for Freedom (5.9% in 2006, 15.5% in 2010) to the crisis of 2008, so the historical pattern suggests their popularity will diminish by 2016.

Or does it? The graph the authors refer to helps to clarify this matter. There’s no evidence that the popularity of far right parties diminishes in the longer term. What they’re describing is that the confidence interval (the grey area) widens. So much so that you can’t really predict on the basis of the available data what will happen after eight years.

Another matter is the interpretation of the effects. Funke et al. consider the political instability following financial crises a «political disaster»:

These developments likely hinder crisis resolution and contribute to political gridlock. The resulting policy uncertainty may contribute to the much debated slow economic recoveries from financial crises.

They seem to imply that governments tend to take appropriate measures and that therefore, having a strong government is good for economic recovery. That’s debatable. People like Paul Krugman and Ewald Engelen argue that the austerity policies of especially European governments have a negative impact on economic recovery.

This is relevant, for previous research found that the same social upheaval Funke et al. associate with financial crises can also be explained as an effect of austerity policies. This raises the question of how causality works here: is social (and political) unrest caused by financial crises, or by the way in which governments respond to these crises? Perhaps the stubborn austerity policies of the European and Dutch governments have contributed to the continuing popularity of the Party for Freedom?

Funke et al. describe their research here; Statewatch has put the original article (pdf) online (I discovered the study via an article by Krugman). The earlier study on austerity and protests was done by Jacopo Ponticelli and Hans-Joachim Voth (I wrote a post on it a couple of years ago).