Trust instead of algorithms

16 September 2018

UPDATE 11 March 2019 - A survey by Dutch trade union FNV found that 25% of participating municipalities apply algorithms to personal data to flag welfare recipients as potential fraudsters. 9% said they hire a commercial organisation to do the analysis. FNV’s vice president Kitty Jong condemned the practice.

A number of Dutch cities have contracted a company named Totta data lab to predict which welfare recipients may have committed fraud (the cities were somewhat secretive about this approach, but newspaper NRC wrote about it last spring). Totta has trained its algorithms on a considerable amount of personal data: two to three hundred variables covering a period of 25 years.

Such analyses carry the risk that existing biases are reproduced:

Luk [a Totta spokesperson] says that in some municipalities more fraud is found among people who have a partner (e.g., they don’t report income), whereas in others it is found among people without a partner (who fail to report that they live together). «But it’s quite possible that only that group has been investigated and we build our algorithms on that.»

Luk says they sometimes add ‘deviant’ citizens to the pool of suspects, apparently in an attempt to look beyond the usual suspects.
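The feedback loop Luk describes is easy to illustrate. Below is a minimal, hypothetical sketch in Python: the population size, the 5% base rate, the has_partner variable and the investigation rates are all invented for illustration and come neither from Totta nor from the NRC reporting. It shows how training labels that record only the fraud that was actually found inherit the old investigation policy rather than the real distribution of fraud.

import random

random.seed(42)

# Invented population of 100,000 welfare recipients. In this toy world the
# true fraud rate is identical (5%) with or without a partner.
population = [
    {"has_partner": random.random() < 0.5, "fraud": random.random() < 0.05}
    for _ in range(100_000)
]

# Hypothetical historical practice: investigators checked 30% of recipients
# WITH a partner, but only 1% of recipients without one.
def was_investigated(person):
    return random.random() < (0.30 if person["has_partner"] else 0.01)

# Fraud only becomes a training label when it was actually found, i.e. only
# for investigated cases; everyone else implicitly counts as "no fraud".
for person in population:
    person["label"] = person["fraud"] and was_investigated(person)

def apparent_rate(has_partner):
    group = [p for p in population if p["has_partner"] == has_partner]
    return sum(p["label"] for p in group) / len(group)

print(f"apparent fraud rate, with partner:    {apparent_rate(True):.2%}")   # about 1.5%
print(f"apparent fraud rate, without partner: {apparent_rate(False):.2%}")  # about 0.05%

Any model trained on this label column will conclude that having a partner is a strong fraud signal, even though the only real difference between the two groups is who was investigated.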

Another problem is the lack of transparency about how these algorithms work. Totta doesn’t disclose its algorithms because it wants to protect its business interests; further, it can be inherently difficult to interpret and explain how such algorithms arrive at their output. As a result, the government cannot explain what criteria it uses to prepare decisions that affect citizens. Recently, the Dutch Council of State expressed concerns over digital decision-making by the government.

Proponents of algorithms argue that they help detect more fraud while reducing the burden on innocent citizens. In practice, the distinction may not be that clear: the association of welfare agencies has said that alleged welfare fraudsters are often people who mean no harm, but who get into trouble as a result of complex and ambiguous welfare rules.

Still, Amsterdam city council member Anne Marttin (VVD) finds the approach interesting. She asked whether Amsterdam uses algorithms and data mining to detect welfare fraud. The answer is no. This is why:

The city government is aware of the use by other municipalities of algorithms and/or data mining to fight welfare fraud. The city does not use such instruments to deal with or prevent welfare fraud. […]

Our services for welfare recipients are based on trust. Further, the city government attaches great importance to citizens’ privacy and to the way their data is used by the government, for example to develop algorithms. The city government considers it very important that the use of data mining and algorithms does not harm the privacy and legal protection of citizens.

Source (pdf)
