champagne anarchist | armchair activist

Academic support for Mélenchon, mapped

On Sunday, the first round of the French presidential election will be held. Left-wing candidate Jean-Luc Mélenchon has surged in the polls, and right-wingers have called his programme devastating. On the other hand, over a hundred economists have stated that he offers a serious and credible alternative to the destructive austerity policies of the past decades.

Given Mélenchon’s criticism of Germany’s economic policy and his support for Greece, one might expect academic support for his programme to be concentrated in the south of Europe. However, the map shows that his academic supporters are also based in countries like the UK and Germany.

Read more about Mélenchon’s programme here and here.

Method

I geocoded the affiliations of the list of supporters using this tool and Bing’s Maps API. Sometimes Bing gets the location of the institution right, sometimes it returns the location of the city where the institution is located, and sometimes it fails. I’ve corrected a few coordinates manually, but I can’t rule out that I missed some errors.
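For illustration, the geocoding step could look roughly like the sketch below, which queries the Bing Maps REST Locations API directly; the key and the affiliation names are placeholders, and this isn’t necessarily the exact workflow of the tool I used.

import requests

BING_KEY = 'YOUR_BING_MAPS_KEY'  # placeholder
affiliations = ['Université Paris 1 Panthéon-Sorbonne', 'University of Leeds']  # hypothetical examples

def geocode(query):
    url = 'http://dev.virtualearth.net/REST/v1/Locations'
    r = requests.get(url, params={'query': query, 'key': BING_KEY})
    resources = r.json()['resourceSets'][0]['resources']
    if not resources:
        return None  # Bing failed to geocode this affiliation
    lat, lon = resources[0]['point']['coordinates']
    return lat, lon

for name in affiliations:
    print(name, geocode(name))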


Reject all evidence: How George Orwell’s 1984 went viral last January

On Sunday 22 January 2017, Trump adviser Kellyanne Conway introduced the term alternative facts to justify disputed White House claims about how many people had attended Trump’s inauguration. The term was quickly associated with the newspeak and doublethink of George Orwell’s novel Nineteen Eighty-Four. Sales of the book became ‘hyperactive’ during the following week.

I looked up some 150,000 tweets about Orwell’s ‘1984’ to see how interest in the novel developed during that week (note that analysing tweets is a somewhat messy business - see Method below for caveats).

But first, a basic timeline. On Friday 20 January, the inauguration took place. Afterwards, people started tweeting photos showing empty spots in the audience. On Saturday, the White House claimed the photos were misleading and that the inauguration had drawn the «largest audience to ever witness an inauguration». On Sunday, Conway appeared on NBC’s Meet the Press and defended the White House claim as alternative facts.

Alternative facts

The chart below shows tweets about Orwell’s 1984 and how many of those tweets specifically mention alternative facts. Immediately after Conway’s Meet the Press interview, the first tweets appeared that made the connection between alternative facts and 1984 (the green line in the chart). The real peak occurred on Tuesday, when major media started to discuss the connection.

The alternative facts quote can explain some of the interest in ‘1984’, but there was also a peak in Orwell 1984 tweets even before the interview with Conway took place.

Amazon sales

Meanwhile, sales of the book ‘1984’ on Amazon started to rise. On Sunday, the day of the interview, it reached the top 20. On Tuesday, the Guardian reported it had reached number 6 and in the evening of that same day, it became the number 1 best-selling book on Amazon.

At some point, people started to discuss the rising book sales on Twitter, as the chart below shows.

Tweets about sales of ‘1984’ didn’t really take off until Tuesday, and largely coincided with talk about the alternative facts quote.

Reject all evidence

That still leaves the question of what the earlier Orwell 1984 tweets were about. Interestingly, almost all of these earlier tweets contain the following quote from ‘1984’, which describes how the authorities redefine truth:

The Party told you to reject all evidence of your eyes and ears. It was their final, most essential command.

The chart below shows tweets containing this quote.

On Saturday evening, the White House had held its press conference at which it claimed a record number of people had attended the inauguration. The first reject all evidence tweet I could find was posted before that press conference, but the quote didn’t catch on until after the press conference. Within days, the quote was tweeted over 50,000 times.

In short, Conway’s remark on Sunday about alternative facts boosted interest in ‘1984’, but didn’t start it.

Meanwhile, the 1984 tweets probably reflect a broader phenomenon. Various media have discussed how dystopian novels like ‘1984’ are ‘chiming with people’ (get your reading list here).

Method

I used Python and the Tweepy library to search the Twitter API for orwell 1984. This method has limitations. Twitter provides a sample of all tweets and no-one knows exactly how much is missing from that sample. Further, searching for orwell 1984 may overlook tweets only mentioning orwell or 1984, or even nineteen eighty-four, as in the official book title.
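As an illustration, the collection step could look roughly like this with a pre-4.0 version of Tweepy (the credentials are placeholders, and this is a sketch rather than my exact script):

import tweepy

# placeholder credentials
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth, wait_on_rate_limit=True)

# collect the timestamp and text of each tweet matching the search
tweets = []
for status in tweepy.Cursor(api.search, q='orwell 1984').items():
    tweets.append({'time': status.created_at, 'text': status.text})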

The search for orwell 1984 yielded some 150,000 tweets. If the text contained both alternative and facts (this includes tweets containing #alternativefacts), I classified the tweet as being about alternative facts; if it contained amazon, sales, bestseller or best-seller, I classified it as being about sales. If it contained reject, evidence and eyes, I classified it as containing the quote «The Party told you to reject all evidence of your eyes and ears. It was their final, most essential command».
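In code, these rules amount to something like the following sketch (my reconstruction of the classification described above, applied to lowercased tweet texts):

def classify(text):
    """Return the labels that apply to a tweet text."""
    text = text.lower()
    labels = []
    # a substring check for 'alternative' and 'facts' also catches #alternativefacts
    if 'alternative' in text and 'facts' in text:
        labels.append('alternative facts')
    if any(word in text for word in ['amazon', 'sales', 'bestseller', 'best-seller']):
        labels.append('sales')
    if all(word in text for word in ['reject', 'evidence', 'eyes']):
        labels.append('reject all evidence quote')
    return labels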

I used 9 am as the time at which Meet the Press was aired. For the time of the original White House claim about attendance at the inauguration, I used this recorded live feed, which was announced to start at 4:30 pm; the actual press conference starts about 1.5 hours in, i.e. around 6 pm.
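The hourly counts behind the charts can then be produced along these lines; this is my assumption about the aggregation step, using pandas, not necessarily the exact code behind the charts:

import pandas as pd

# hypothetical input: one timestamp per matching tweet
times = pd.to_datetime(['2017-01-22 09:05', '2017-01-22 09:40', '2017-01-24 14:10'])
# count tweets per hour
hourly = pd.Series(1, index=times).resample('1H').sum()
print(hourly)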


Python script to import .sps files

In a post about voting locations (in Dutch), I grumbled a bit about inconsistencies in how Statistics Netherlands (CBS) spells the names of municipalities, and about the fact that they don’t include municipality codes in their data exports. This afternoon, someone who works at CBS responded on Twitter. She had asked around and found a workaround: download the data as SPSS. Thanks!

CBS offers the option to download data as an SPSS syntax file (.sps). I wasn’t familiar with this file type, I don’t have SPSS, and I couldn’t immediately find a package to import it. But it turns out that .sps files are just text files, so I wrote a little script that does the job.

Note that it’s not super fast; there may be more efficient ways to do the job. Also, I’ve only tested it on a few CBS data files. I’m not sure it’ll work correctly if all variables have labels or if the file contains not just data but also statistical analysis.

That said, you can find the script here.
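To give an idea of what the script involves, here’s a stripped-down sketch that assumes the .sps file holds its data between BEGIN DATA and END DATA lines; the actual script also handles the variable definitions:

def read_sps_data(path):
    """Collect the data rows between BEGIN DATA and END DATA."""
    rows = []
    in_data = False
    with open(path, encoding='latin1') as f:
        for line in f:
            stripped = line.strip()
            if stripped.upper().startswith('BEGIN DATA'):
                in_data = True
            elif stripped.upper().startswith('END DATA'):
                in_data = False
            elif in_data and stripped:
                rows.append(stripped.split())
    return rows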


How to automate extracting tables from PDFs, using Tabula

One of my colleagues needs tables extracted from a few hundred PDFs. There’s an excellent tool called Tabula that I frequently use, but you have to process each PDF manually. However, it turns out you can also automate the process. For those like me who didn’t know, here’s how it works.

Command line tool

You can download tabula-java’s jar here (I had no idea what a jar is, but apparently it’s a format for bundling Java files). You also need a recent version of Java. Note that on a Mac, Terminal may still use an old version of Java even if you have a newer version installed. The problem and how to solve it are discussed here.

For this example, create a project folder and store the jar in a subfolder script. Store the PDFs you want to process in a subfolder data/pdf and create an empty subfolder data/csv.

On a Mac, open Terminal, use cd to navigate to your project folder and run the following code (make sure the version number of the tabula jar is correct):

for i in data/pdf/*.pdf; do java -jar script/tabula-0.9.2-jar-with-dependencies.jar -n -p all -a 29.75,43.509,819.613,464.472 -o "${i//pdf/csv}" "$i"; done

On Windows, open the command prompt, use cd to navigate to your project folder and run the following code (again, make sure the version number of the tabula jar is correct):

for %i in (data\pdf\*.pdf) do java -jar script\tabula-0.9.2-jar-with-dependencies.jar -n -p all -a 29.75,43.509,819.613,464.472 -o data\csv\%~ni.csv "%i"

The settings you can use are described here. The examples above use the following settings:

  • -n: stands for nospreadsheet; use this if the tables in the PDF don’t have gridlines.
  • -p all: look for tables in all pages of the document. Alternatively, you can specify specific pages.
  • -a (area): the portion of the page to analyse; the default is the entire page. You can choose to omit this setting, which may be a good idea when the location or size of tables varies. On the other hand, I’ve had a file where tables from one specific page were not extracted unless I set the area option. The area is defined by coordinates that you can obtain by analysing one PDF manually with the Tabula app and exporting the result not as csv, but as script.
  • -o: the name of the file to write the csv to.

In my experience, you may need to tinker a bit with the settings to get the results right. Even so, Tabula will sometimes get the rows right but incorrectly or inconsistently identify cells within a row. You may be able to solve this using regex.
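For instance, if Tabula merges two numeric cells into one field, a regex along these lines could split them apart again (a hypothetical example, not a general recipe):

import re

row = ['Municipality X', '1,234 5,678']  # hypothetical row with two merged cells
fixed = []
for cell in row:
    # split on whitespace that is followed by a digit
    fixed.extend(re.split(r'\s+(?=\d)', cell))
print(fixed)  # ['Municipality X', '1,234', '5,678']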

Python (and R)

There’s a Python wrapper, tabula-py, that will turn PDF tables into pandas dataframes. As with tabula-java, you need a recent version of Java. Here’s an example of how you can use tabula-py:

import tabula
import os

folder = 'data/pdf/'
# collect the paths of all PDFs in data/pdf
paths = [folder + fn for fn in os.listdir(folder) if fn.endswith('.pdf')]
for path in paths:
    # extract tables from the specified area of all pages
    df = tabula.read_pdf(path, encoding='latin1', pages='all', area=[29.75, 43.509, 819.613, 464.472], nospreadsheet=True)
    # store the result in data/csv, with a .csv extension
    path = path.replace('pdf', 'csv')
    df.to_csv(path, index=False)

Using the Python wrapper, I needed to specify the encoding. I ran into a problem when I tried to extract tables with varying sizes from multi-page PDFs. I think it’s the same problem as reported here. From the response, I gather the problem may be addressed in future versions of tabula-py.

For those who use R, there’s also an R wrapper for tabula, tabulizer. I haven’t tried it myself.

Call tabula-java from Python

[Update 2 May 2017] - I realised there’s another way, which is to call tabula-java from Python. Here’s an example:

import os

pdf_folder = 'data/pdf'
csv_folder = 'data/csv'

# template for the tabula-java call; the jar is assumed to be in the working directory
base_command = 'java -jar tabula-0.9.2-jar-with-dependencies.jar -n -p all -f TSV -o {} {}'

for filename in os.listdir(pdf_folder):
    if not filename.endswith('.pdf'):
        continue  # skip anything that isn't a PDF
    pdf_path = os.path.join(pdf_folder, filename)
    csv_path = os.path.join(csv_folder, filename.replace('.pdf', '.csv'))
    command = base_command.format(csv_path, pdf_path)
    os.system(command)

This solves tabula-py’s problem with multi-page PDFs containing tables of varying sizes.


Trick the trackers with a flood of meaningless data

A couple of years ago, Apple obtained a patent for an intriguing idea: create a fake doppelgänger that shares some characteristics with you, say birth date and hair colour, but has other interests - say basket weaving. A cloning service would visit and interact with websites in your name, messing up the profiles that companies like Google and Facebook keep of you.

I don’t think anyone has implemented it. But now I read on Mathbabe’s blog about a similar idea that actually has been implemented. It’s called Noiszy, and it is

a free browser plugin that runs in the background on Jane’s computer (or yours!) and creates real-but-meaningless web data – digital «noise». It visits and navigates around websites from within the user’s browser, leaving your misleading digital footprints wherever it goes.

Cool project. However, it has been argued that the organisations that are tracking you can easily filter out the random noise created by Noiszy.

