Turing Data Futures: AI for government policies and social justice


Last week I attended my second Turing lecture, and it didn’t disappoint. I heard from two incredible researchers about different uses of AI and predictive analytics in the public sphere.

Policy Priority Inference

Omar Guerrero, a computational social scientist and Turing Fellow currently based at UCL, shared insights from a tool he has developed that uses some pretty neat computational techniques to trace the effects of socioeconomic policies. Omar is quite passionate about the Sustainable Development Goals, but he has realized that in many cases the indicators used to measure a country's progress do not align with these goals and rarely capture the true effect of the policies being implemented. For example, a country could be investing in building a school, hoping to improve education (goal #4). In reality, however, the indicators measuring education (like the share of the population graduating or being able to read) rise because roads have been built or a bus route has been reinstated and kids now take it to the school in the village nearby (goal #11), or because a vaccine has been administered, so kids no longer miss school because they are ill (goal #3)… In other words, different circumstances lead to goals being reached in different countries, and the goals correlate with each other in different ways. These synergies are difficult to identify, and they can lead to governments investing in policies that are not actually producing the results being measured. Omar's method, which builds on earlier work he did with a colleague for UNDP called Policy Priority Inference, combines agent computing models, network theory and machine learning to account for the complexities and inefficiencies of measuring progress and mapping it back to government policies.
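To make the spillover idea a little more concrete, here is a minimal toy sketch (my own illustration, not Omar's actual model or data): indicators sit on the nodes of a directed graph, and weighted edges describe how investment in one goal spills over into the measured progress of another.

```python
# Toy illustration of an SDG "spillover network"; all nodes, weights and
# numbers are hypothetical, not taken from the Policy Priority Inference model.
import networkx as nx

G = nx.DiGraph()
# Edge weight: how strongly investment in one goal spills over into another.
G.add_edge("goal 11: transport", "goal 4: education", weight=0.6)  # bus route -> attendance
G.add_edge("goal 3: health", "goal 4: education", weight=0.4)      # vaccines -> fewer sick days
G.add_edge("goal 4: education", "goal 4: education", weight=0.2)   # direct effect of the school

def education_gain(investments):
    """Naive linear spillover: sum of (investment * edge weight) into education."""
    return sum(
        investments.get(src, 0.0) * data["weight"]
        for src, dst, data in G.edges(data=True)
        if dst == "goal 4: education"
    )

# A government investing equally in all three areas would see most of the
# measured education gain come from transport and health, not the school itself.
print(education_gain({"goal 11: transport": 1.0,
                      "goal 3: health": 1.0,
                      "goal 4: education": 1.0}))
```

Even in this crude sketch, reading the education indicator alone would credit the school for gains that mostly arrive through the other two goals, which is exactly the misattribution Omar's method tries to untangle.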

Read more about this work in his latest paper, How do governments determine policy priorities? Studying development strategies through spillover networks, and follow him on Twitter @guerrero_oa.

Data Justice

Lina Dencik is the Director of the Data Justice Lab at Cardiff University. Her work focuses on the use of predictive analytics in public organisations. In a recent project, she and her team sent about 400 freedom of information requests to councils around the UK to find out whether they use AI-based tools in their decision making. And oh yes, they do! 53 councils aggregate data from a variety of online and offline sources, including consumer-data companies like Experian (!?!), to compute scores, categorise citizens, and decide where to allocate resources based on those scores. Not surprisingly, the use of predictive analytics is driven by austerity: there are never enough people to go through and understand everything that is happening, and these tools enable faster decision-making. But is it better? All the councils, she mentioned, lean on the narrative that it is the professional who ultimately makes the decision. How much of that actually happens in practice, and how involved we are in the process, is yet to be determined. There is, though, a very critical question we should always ask: what right do we have to assign scores to other people? Lina also drew our attention to the fact that we don't even know this is happening, and we have to do something about it. She mentioned a few examples of citizen committees in the US. In Oakland, CA, the general population can decide whether certain AI-based tools should be approved for use in public organisations like the city council, the police and the legal system.
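For a sense of what this kind of scoring pattern might look like in the abstract, here is a deliberately simplistic sketch. Every field, weight and cutoff below is invented for illustration; none of it comes from Lina's research or any real council system.

```python
# Hypothetical sketch of a score built from heterogeneous data sources;
# the fields, weights and threshold are all made up for illustration.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    missed_school_days: int     # council-held education data
    housing_arrears_gbp: float  # council-held housing data
    broker_risk_segment: float  # score bought from a commercial data source, 0..1

def risk_score(rec: CitizenRecord) -> float:
    """Collapse unrelated data sources into a single opaque number."""
    return (
        0.3 * min(rec.missed_school_days / 30, 1.0)
        + 0.3 * min(rec.housing_arrears_gbp / 1000, 1.0)
        + 0.4 * rec.broker_risk_segment
    )

# Resources (e.g. an early-intervention visit) get allocated above a cutoff.
record = CitizenRecord(missed_school_days=12,
                       housing_arrears_gbp=250,
                       broker_risk_segment=0.7)
print(risk_score(record), risk_score(record) > 0.5)
```

The unsettling part, as Lina's questions suggest, is not the arithmetic itself but the choices buried in it: who picked those weights, who decided on the cutoff, and does the person being scored ever get to know?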

If you want to investigate the uses of predictive analytics and scoring systems in public services (in the UK), you can have a look at the tool developed by the Data Justice Lab and read their report. Follow Lina Dencik's latest updates about her research and her fresh-off-the-press book on Twitter @linadencik.