UCL Uncovering Politics

AI and Public Services

Episode Summary

This week we’re looking at AI and public services. How far could AI tools help to tackle stagnant public sector productivity? What dangers are associated with AI adoption? And how can these dangers be addressed?

Episode Notes

Artificial intelligence is increasingly being touted as a game-changer across various sectors, including public services. But while AI presents significant opportunities for improving efficiency and effectiveness, concerns about fairness, equity, and past failures in public sector IT transformations loom large. And, of course, the idea of tech moguls like Elon Musk wielding immense influence over our daily lives is unsettling for many.

So, what are the real opportunities AI offers for public services? What risks need to be managed? And how well are governments—particularly in the UK—rising to the challenge?

In this episode, we dive into these questions with three expert guests who have recently published an article in The Political Quarterly on the subject:

Helen Margetts – Professor of Society and the Internet at the Oxford Internet Institute, University of Oxford, and Director of the Public Policy Programme at The Alan Turing Institute. Previously, she was Director of the School of Public Policy at UCL.

Cosmina Dorobantu – Co-director of the Public Policy Programme at The Alan Turing Institute.

Jonathan Bright – Head of Public Services and AI Safety at The Alan Turing Institute.

 

Episode Transcription

Alan Renwick: [00:00:00] Hello, this is UCL Uncovering Politics. This week we're looking at AI and public services. How far could AI tools help tackle stagnant public sector productivity? What dangers are associated with AI adoption? And how can these dangers be addressed? My name is Alan Renwick, and welcome to UCL Uncovering Politics, the podcast of the School of Public Policy and Department of Political Science at University College London.

Artificial intelligence is beginning to revolutionize the world around us. One area of positive impact is in the delivery of public services. But the track record of public sector IT transformations is not good. There are serious concerns about equity and fairness, and the prospect of Elon Musk and his friends lording it over us all fills many with dread.

What are the opportunities that the rise of AI presents? [00:01:00] How can the dangers best be addressed? A recent article in the Political Quarterly explores these questions. In our series focusing on new research in the Political Quarterly, I'm joined by the article's three authors.

They are Helen Margetts, Professor of Society and the Internet at the Oxford Internet Institute at the University of Oxford, and Director of the Public Policy Programme at the Alan Turing Institute.

Cosmina Dorobantu is Co-Director of the Public Policy Programme at the Alan Turing Institute, Turing OII Fellow at the University of Oxford, and Visiting Professor in Practice at the London School of Economics. Jonathan Bright is Head of Public Services and AI Safety at the Alan Turing Institute. Helen, Cosmina, and Jonathan, welcome to UCL Uncovering Politics.

It's fantastic to have you all on the podcast. And might we start just by saying a bit more about what this paper is that we're [00:02:00] talking about today. What question are you asking? What kinds of opportunities, what sorts of dangers should we be aware of when we're thinking about these issues?

Helen, do you want to start? 

Helen Margetts: Thank you. Before the election, public services were seen as broken in some way; they needed mending, they needed fixing.

At the same time, there was a lot of talk about fiscal prudence and not spending any more money on public services. That immediately suggests a challenge. The three of us were working in the Public Policy Programme at the Alan Turing Institute, where we try to help government make the most of data-driven technologies like data science and AI.

We felt this challenge was almost made for us. One of the things we most often talk about when we talk about these technologies is productivity and the possible gains. That seemed a way around wanting to have better public services but not wanting to spend any more money.

Helen Margetts: And that's what this article is about: we wrote it to tackle that challenge.

Alan Renwick: Cosmina, do you want [00:03:00] to add to that?

Cosmina Dorobantu: It has been exciting to think about the opportunity. There is a longstanding hope that technology would raise productivity.

The public sector in the UK employs about six million people and is responsible for about 20 percent of the country's economic output. Productivity gains can scale up and make a huge difference. We considered how we can increase productivity with those technologies in the public sector. 

We also thought about the greatest harm that a push for productivity could bring, and that harm is growing inequity. So the argument we make in the article is that productivity should not be the only focus: work on productivity and equity must go hand in hand. The public sector has a responsibility to prioritize both.

Alan Renwick: We'll say more as we go on about the nature of equity and the equity challenges associated with these policy choices. Jonathan, do you want to reflect on starting points for this work and why you got going with it?

Jonathan Bright: Absolutely. I think this is an auspicious [00:04:00] time to be thinking about digital transformation within government. All three of us have been doing this for at least ten years, and the landscape of digitization in government has changed a lot.

We're seeing government doing more in-house and being optimistic and bold. The arrival of generative AI, now a consumer-facing product for more than two years, presents real benefits to be seized. It's not complicated to implement these technologies now, and we're seeing a wave of experiments, pilots, and proofs of concept. This is the right time to be thinking about this sort of stuff.

Alan Renwick: Helen, as Jonathan was pointing out, this paper is building on work that the three of you have been doing for quite some time. The paper sets out conclusions and recommendations. It would be great to hear more about the foundations the paper is built on.

Helen Margetts: I've been working on the relationship between technology and government since the 1990s when nobody else was interested in it. But of course, then along came the internet and now there's AI.

Everybody's interested in it, which is great because you get invited to [00:05:00] be on podcasts. 

Alan Renwick: How did you first get interested in this? 

Helen Margetts: Sure. I've had a bit of an eclectic career, I think that's the word. My first degree is in mathematics.

I had an earlier career in the private sector as a computer scientist. I then went and did a master's degree and a PhD at the LSE, and I was studying government, studying public administration, studying public policy, but I couldn't help noticing that nobody ever mentioned any computers.

I thought, that's interesting, because there must be some. And of course there were. That's what I wrote my PhD about: it prompted me to look for them and to discover that government was struggling with digital technology, struggling to make the most of these technologies in exactly the kinds of ways we talk about in the paper.

I was very interested, and I wanted to investigate more thoroughly. Because my first degree is in mathematics, I have always been interested in data and numbers. When the so-called big data revolution came along [00:06:00] in the 2010s and the government of the time was talking about setting up the Alan Turing Institute, I was excited.

I thought there should be a part of the national institute thinking about how these technologies could be used in government, to help government have a better experience with the latest technologies than with earlier generations.

I got involved with the Turing Institute at the beginning, and we got some budget to start to set up the Public Policy Programme.

Alan Renwick: Does one of you want to explain what the Alan Turing Institute is? It's a national institute that focuses on these issues? I mean, it's, it's got an ac.uk web address. So, it's a kind of academic institution, but in a sense, it's quite different from a standard academic institution. 

Helen Margetts: It was a partnership between the government and five universities: Oxford, UCL, Edinburgh, Cambridge, and Warwick.

The idea at the time, remember, was to use these technologies to make the world a better place more generally. It wasn't just about government. [00:07:00] Of course, that's the bit that we work on.

But it's also about thinking how you tackle sustainability challenges and health challenges and all the challenges that we face in the modern world.

Alan Renwick: And Cosmina, what have been the issues that you've focused on in your work? 

Cosmina Dorobantu: I started my career at Google and at Google, I had access to their amazing databases. And of course, I was using them to analyse search traffic volume, advertising revenues, etc. But the entire time I was there, I was thinking there's so much you can do with this data if you just try to answer some societal or economic questions.

So, I went on to do a PhD using data from online platforms like Google to understand things about digital trade or online auctions or market mechanisms just using those vast data sets. When I finished my PhD, I thought, this is what I want to do. I want to put this data and technologies in the service of the public sector.

Helen had the same idea. We met and set up this programme of work. And when we [00:08:00] started, we went around and talked to hundreds of people in government, asking: what do you need from the national institute? What do you need help with? And they came back with hundreds of questions, but they mostly fell within four categories.

One was, can you help us model the world better? The economic and social models don't allow us to estimate the impact of policies and so on. Can you do better when it comes to evidence-based policy and modelling? So that's mostly a data science question. The second category was public services.

Would you be able to build machine learning tools to improve public services? The third question was, should we be doing this work? What are the ethical implications? Should we build a machine learning algorithm that identifies the children who should end up in care?

And that set of questions is about ethics and responsible innovation. The fourth set is, how should we regulate these technologies? What should we do about them? So once we had these four categories of questions, we set up a team of [00:09:00] researchers to address each one of them. And we've worked with more than 100 public sector organizations in the last seven years, from local authorities to international organizations like UNESCO and the Council of Europe.

Alan Renwick: That background is helpful. Thank you. Let's get on to the paper. The starting point is that, in theory, as you just said, there's much potential for improvement through the adoption of AI. But the history of adoption of AI, sorry, adoption of IT, I should say, has been quite problematic, certainly in UK public services.

Jonathan, do you want to pick that up and say a little bit about what sorts of problems have been encountered? What are the reasons for failure?

Jonathan Bright: Yeah, the history of tech adoption in the public sector has been littered with challenges. Government is the most complicated institution we have.

It's way more complicated than any private industry. A local government organisation, depending on how you count it, might do up to a thousand different things, a thousand different services. Very [00:10:00] unrelated stuff, from taking care of our elderly relatives to taking out your bins and things like that.

These are complicated organizations, and it is hard to do tech change correctly in these organizations. I think that's an important starting point. Nevertheless, it is the case that we've seen notable failures in the past. 

If bias, fairness, and accuracy aren't considered at the beginning of these processes, then we can often get into situations where we're unable to justify the outcomes of what algorithms are producing. I think the A-level results during COVID are a good example of that.

If we start with the point of view that technology is always going to work, and we don't plan at all for failure, or even think about that failure being possible, then we get situations where, when the technology does fail, we ignore it and double down. And if you look at what happened at the Post Office, I think that's a good example of that as well.

I think there are lessons from the last 10 years of [00:11:00] tech adoption in the public sector to help do things better. But it is difficult. It is challenging. 

Alan Renwick: And one of the things I find quite interesting in the paper is that you suggest that one reason these sorts of failures have happened is structural: it's to do with excessive outsourcing of these kinds of activities in the public sector, such that there's just a lack of internal know-how.

Is that fair?

Helen Margetts: Yes, that is fair. If you think about the time computer technologies entered government, it was during the 1970s and 1980s. The way we were thinking about government, particularly when it came to the Thatcher era, was about government doing the things it was good at, not doing the things it wasn't good at, and buying that expertise in. And of course, big technology projects are not something that government has traditionally been thought of as good at.

The government was about bureaucracy and policymaking. Civil servants avoided [00:12:00] being responsible for technology. The more the public sector got a reputation for failed projects, the less people wanted to be associated with them.

And of course, the more the market grew of very large global computer services providers who offered to do all this for government. They built large, bespoke systems which were not only not innovative but actually blocked innovation, because they locked government into siloed systems that didn't take the opportunities of data but just automated what was already there: paper-based records and files.

Yeah, that's been a problem throughout the history of government computing.

Alan Renwick: Yeah. And you have some interesting thoughts in the paper around how AI might be a bit different and work a bit better, which we'll get to in just a moment. But before we get there, it would be useful just to reflect a little more on what does better actually mean?

What does [00:13:00] good mean in this context? What does success look like in this context? And we've talked a bit about productivity. We've talked about equity. Cosmina, do you want to dig a little bit further into that? What are the criteria of success in AI adoption?

Cosmina Dorobantu: Sure. Just to go back a little to the data point.

Whenever we talk about data, we look with great envy at online platforms and say they have all the data. But actually, there's another massive holder of data, which is the government itself. The government holds so much data from all the transactions it conducts daily with its citizens.

Now, the issue with this data is that it's not being used; it's not being collected and stored in a way that it can be used, although there's lots of information in it. Success for government would be to make use of all the transaction data it holds, not just to gain insights about a given individual, but to gain insights about trends and societal and economic [00:14:00] issues.

So, I think, capitalizing on the wealth that it sits on would be success. 

Alan Renwick: You talk about different kinds of productivity: services productivity and policy productivity. And I guess services productivity is something that lots of people might have thought about just in terms of automating routine decisions that are constantly happening.

And you have a number in the paper: I think there were a billion of these little decisions made every year in the UK public sector. But you also talk about policy productivity and the idea that we can use AI to craft policymaking more intelligently and to allocate resources to different budget streams within government.

Cosmina Dorobantu: That's right. When we talk about public services productivity, the dream of officials is to automate entire processes, to replace people altogether and just have a machine do all the work.

And I think this is where we see poor design and deployment of AI in the public services. [00:15:00] So one of the things that we've been trying to do, and Jonathan's team has worked quite a lot on this, is trying to convince the government that's not where the potential is.

The potential is in automating those micro transactions. So, if you think about getting a passport, for example, getting from the application to the actual passport, there are a lot of micro decisions and transactions, like checking a photo or a piece of text. Central government conducts about a billion of those micro transactions. And out of this billion, about 120 million are highly automatable with AI. But those are the small things, like I said, checking a photo and so on. That's where the potential is, but this isn't what dreams are made of.

Getting government excited about automating the minutiae of its bureaucratic processes is not something that you're going to have a huge press announcement about. But it could bring so much in terms of productivity gains. Those are the low-hanging fruit. And on the policy side, we have about 52,000 budget lines [00:16:00] in the UK, the last time I looked, which means there are about 52,000 policy decisions underpinning them. These policy decisions ultimately decide how more than a trillion pounds of public money gets allocated each year. If you have a way to optimize that allocation, to make it work better, that would be incredible. What our models have been bad at is capturing the complexity of our world and capturing interdependencies. So, if you invest in health, for example, you might get better educational outcomes as well, but that's not captured in our models. It's impossible to arrive at an optimal budget allocation if you don't know what those spillover effects are. And I think with data science methodologies, we have ways to capture those spillovers and interdependencies.

One of the arguments that we're making is that you can allocate a trillion pounds better if you understand the world better.

Alan Renwick: Yeah. So those are the opportunities, the promise. Just going back, Helen, to [00:17:00] what you were saying about how things have gone wrong in the past, and the paper suggests we should be more optimistic we can avoid the problems of the past and capture these opportunities.

In the case of AI, what are the reasons for thinking that?

Helen Margetts: One is that we've learned lessons from the past. I'm sure we're going to move on to it, but the AI Opportunities Action Plan that the government has just produced is accompanied by a report entitled the State of Digital Government report.

And that report takes a frank look at the past. In fact, it comes to many of the same conclusions that I came to in my own PhD about some of the things we were talking about earlier, particularly the use of consultants instead of public sector staff and the inadequacies of digital government in the UK.

It's a frank and honest report, and I recommend everybody read it, or at least have a look at it, to understand the situation. So that's good. [00:18:00] We know where we went wrong. These technologies require public servants to be a bit more hands-on.

You can't just buy them in a box. There are all sorts of issues with buying these technologies in that kind of way. That does mean that there must be a bit more thinking about how they're going to help with policy making, how they're going to help with services in the way that Jonathan's team have been doing.

That's what we've been trying to do at the Public Policy Programme, to help government think about that. And I think you see a lot more of that thinking going on. The other thing, of course, is that it is the advent of generative, so-called generative AI and large language models.

I might hand over to Jonathan there because he's done some interesting work on that. 

Jonathan Bright: Sure. So, we're talking about reasons to be optimistic that this time might be different. We need to be aware of past reasons for failure. There's also the fact, and this again is mentioned in the report that Helen just flagged, which [00:19:00] came out last year, that past waves of digitization, even when they have been successful (bringing in email or cloud services, or just generally getting government online), haven't necessarily translated into productivity gains.

By some measures, overall public sector productivity has been flat or even falling over the last 20 years. This is the period in which we saw the introduction of digital technologies we now take for granted. We do need to be aware of that and conscious of not only doing tech projects well, but then realizing the gains of those projects afterwards.

But there is another reason for optimism in generative AI, which is what Helen prompted me on: the fact that people within the public sector are very enthusiastic about using these tools. And again, that's a notable difference from what's gone before in digital technology.

Most of the time, people in the public sector, if you think of a doctor, or a teacher, or a police officer, are provided [00:20:00] with a management information system which helps senior management with compliance tasks. But to that doctor or social worker, it's just another form to fill in while they're trying to get on with their day.

Generative AI is a bit different to that. It seems to be a tool that helps them get on with their day and perhaps fill in some of the forms that the other digital technologies have created. And that's why in some of the survey work we've seen lots of optimism about AI in public services.

We just finished a major survey of the medical profession, and a strong majority say they're optimistic about the integration of AI systems. Very few people say it's being deployed before it's ready. Most people say AI is underexplored in these kinds of settings. So again, that's, I think, another reason for optimism.

Alan Renwick: Yeah, that was an idea that came across to me quite strongly in the paper: past IT failures have often been products of big, top-down IT projects, whereas there's more scope for bottom-up development of new ways [00:21:00] of working in this case, and that seems much more optimistic and positive about what might happen.

But there are dangers as well, and you also highlight the dangers, particularly the dangers of inequality and exacerbation of existing inequalities from these technologies. Cosmina, do you want to tell us a little bit about the sorts of inequality-related problem that you think we need to be aware of?

Cosmina Dorobantu: Sure. You can have AI-powered public services, for example, but you need to make sure that everybody can access them and everybody can use them. So, you need to make sure that everybody has the skills and the device and the connectivity to be able to take advantage of them. And we know that the number of people who do not have that access is in the millions.

One of the things that we're suggesting in the paper is the creation of a digital inclusion unit to make sure that everybody can access whatever new version of public services we're going to be [00:22:00] building. We also know that these technologies produce biased outcomes. And they risk replicating the biases that we've had in our criminal justice systems.

We recommend making fairness and bias a top priority. And the other thing that I would mention is that the public absolutely need to be involved in the design, development, and deployment of these technologies. For AI to add billions to our economies, we need to have the public come along with us; public trust is huge. And we have seen a marked decline in public trust here in the UK between 2022 and 2023, down to about 38 percent. And we've done work at the Turing on participatory equity science to get citizens involved and interested and engaged with technologies.

Alan Renwick: Yeah, I love this idea in the paper of conversational consultation, the idea that you can have AI tools that enable people to feed back on [00:23:00] how they're experiencing public services, and then there could be a bit of a conversation going on. Helen, do you want to develop that?

Helen Margetts: It comes from the things that generative AI allows you to do.

The problem with consultation has been that government's always been really bad at it. They put out a public consultation, they get a few thousand responses, some very clever civil servant looks at them, maybe writes a report, and then that's it. Whereas now there is a possibility to design something much more responsive. I'm going to ask Jonathan to explain that, because he's doing it.

Alan Renwick: Jonathan?

Jonathan Bright: I wouldn't say that we've done it yet, but one of the promises of generative AI, and this is also the way people are starting to interact with it, is its conversational style: it can ask a question, you respond, and it follows up. That interaction is missing in most of the ways that citizens' thoughts and advice and opinions are solicited by government, [00:24:00] largely for practical reasons, right?

Even if you submit to a consultation, you don't know if your voice is being heard, how it's being heard, or how it relates to other people's. And so, one of the promises here is the potential for a system which improves on that a little bit and allows people to experience interaction while contributing to consultations.

Alan Renwick: Just going back to inequalities: Cosmina talked about a couple of types of inequality that we might be concerned about here. One is to do with the fact that some people just don't have ready access to the internet and the right devices, and it's important to enable people to gain that access.

The second was the biases that exist within society, which AI has a habit of replicating. Another source of inequality in the paper is earnings inequality: you talk about research done in the United States suggesting that the adoption of new technology has been an important source of increasing [00:25:00] inequality in society and in wages, going back to the 1980s.

There's a danger that might be advanced further as a result of AI adoption. And I guess there's also the concern I was slightly cheekily flagging in the introduction. After what Elon Musk and his helpers seem to have been doing, people are talking about the takeover of bits of the American government.

There's concern that power in society is being taken into the hands of a few people who run big tech companies and the rest of us are going to be pawns. What would you say in response to those aspects of inequality?

Helen Margetts: I'll say something about the second of them. I think we do have to think about this. So, we've pointed out some advantages of generative AI and of public servants using it. But the thing is, these technologies are developed by massive companies and will bring a kind of value shift into the public sector.

It's certainly the case that any new trends and patterns of technology usage, or ways of doing things, public sector practices like accounting, for example, will be coming via those technologies. Really, we're going to be getting change that comes through the technologies.

We've not had that before, exactly because public sector technology was distinct; it wasn't really commercialized in that way. I think there are real questions about who's in charge there. I mean, that does apply to some extent to society more generally. If you think back all that time ago, to the end of 2022, ChatGPT was released onto society without warning. These language models have very powerful implications for all kinds of societal aspects. And yet they've come from technology companies, unmediated by any kind of decision making.

So that is something that we're going to have to think about as a society going forward. 

Alan Renwick: [00:27:00] Cosmina?

Cosmina Dorobantu: I agree that the technology lends itself to the concentration of market power in the hands of a few companies. And that's something that we're going to need to think about.

The other thing that we're going to need to think about quite deeply is who's getting left out of this AI revolution. And that's an area of work that is in dire need of more research. We have a tiny project at the Turing that looks at women in data science and AI and tries to understand the ways in which women have been left out of this AI revolution. We find that among the universe of data science and AI professionals in the UK, women make up about 22 percent of the workforce. We find that when you look at venture capital investments, female-led AI startups attract only 0.8 percent of venture capital investment: 136 million pounds, versus 13.5 billion raised by men. Moreover, only 23 percent of women in the UK feel comfortable expressing [00:28:00] political opinions online. So, 77 percent stay quiet, and that has huge implications for our democracies. I feel like these stats are generated by a team of two or three researchers at the Turing, but so much more needs to be done just to bring those stories to life and to give our governments a chance to understand, and to decide whether they're comfortable with the situation or not.

Alan Renwick: So, just summing up the recommendations that come out of this paper. You've talked there about the importance of government being conscious of fairness and bias, and of supporting research in these areas as well.

You've talked about the need for work around digital inclusion. We've talked about improving the ways in which we do consultations and the ways in which citizens can interact with services. I guess another, broader recommendation in the paper, around developing what you refer to as a pro-human approach to the adoption of AI, [00:29:00] is the idea that the focus needs to be on making public services better rather than just more efficient.

Jonathan, do you want to pick that up at all? 

Jonathan Bright: I've always been a bit sceptical about presenting technology, AI, and data science purely in terms of their ability to save money. I don't think it's necessarily a winning argument with the public. I think they see it as an excuse to do public services on the cheap.

And a lot of the pushback that we see on the adoption of technology is precisely because people perceive that they're not getting the full service: I'm getting the robots that are trying to fob me off with some cheap answer. I also think there are lots of areas where AI will have massive benefits, but those benefits don't necessarily realize themselves.

Look at the teaching profession: the average teacher works more than 50 hours per week, and you have a turnover of 10 to 20 percent of people leaving each year. That of course creates huge costs for the education system.

You need to replace those people, there's interruption to education outcomes, et [00:30:00] cetera. If we can bring in AI tools that reduce that working week to a normal working week, which is what it should be considering what they're being paid, that could have huge potential benefits for the quality of teaching, for job satisfaction and for retention.

If you translate that into job cuts, then you'll just have a smaller number of teachers working the same long hours, only quicker, using AI. And that example is repeated across the public sector: in policing, medicine and asylum, for example, a huge pain point that's never talked about is the difficulty of attracting and retaining asylum processing officers, and the same goes for probation work.

So I think there are lots of areas where the first thing AI will do, hopefully, is improve outcomes or improve the way people can do their work, not necessarily save money.

Alan Renwick: In terms of recommendations, going back to what you were saying, Helen, about the lack of in-house capacity: you also argue that, to deliver on the various things that [00:31:00] Cosmina and Jonathan have talked about, it's important to reverse the trend towards outsourcing AI and tech expertise generally, and to have more in-house expertise in government.

Helen Margetts: There'll still be a major role for private sector companies. But one important lesson from the past is that you can't outsource thinking. Government needs to think about what can be done with these technologies and how they can be used in innovative ways to make people's lives better and keep them safe, which is, after all, what public services are about.

And there are tremendous possibilities for these technologies to do that, but it needs thinking about: thinking about how to incorporate them into services and working practices.

Alan Renwick: A final question to each of you. We're pretty much out of time, but just one very final question.

How's the UK government doing so far? You've set out recommendations, things that it ought to do. What can we say about how it's performing so far? Who wants to pick that up? Helen, do you want to?

Helen Margetts: One of the exciting things about writing this [00:32:00] article was that we had several recommendations and then, of course, the government had been elected, and they started doing some of the things that we were recommending.

So we had to change the end of the article a couple of times before it was published. They have produced an AI Opportunities Action Plan, and it is good to see the emphasis on the positive in that action plan. Over the last couple of years, there's been a real tendency to veer towards the risks of AI.

We've heard, over the last couple of years, a lot of terrible stories about how robots are going to turn on humans and destroy humanity, with a lot of emphasis on that safety element of AI. So it's good to hear AI promoted as something positive for government.

Alan Renwick: Do you want to say anything more about what's been happening in government over the last few months? 

Cosmina Dorobantu: We've seen the publication of the State of Digital Government Review, which is a brilliant piece of work. As Helen said earlier, it's a very honest [00:33:00] look at government. I really was deeply, deeply impressed. We've also seen the Blueprint for Modern Digital Government, which saw the launch, for example, of the digital centre of government, which we argue for in the paper. I'm super pleased to see the various bits of government all being united under the same front.

Lots of exciting things happening, lots of potential. The big question looming over all of this is where the funding is going to come from and how much funding is going to go to this. It could save money down the line, but it costs a lot to get right at the beginning.

Alan Renwick: Jonathan, do you want to add a final word? 

Jonathan Bright: Yeah, I think that's right. One of the things that report says, as Cosmina mentioned, was that there's about 26 billion pounds spent on all public sector digital and data. That's only enough to maintain systems that are often in great difficulty.

We see big headline numbers for savings, but how much money there will actually be to implement these things is a big question.

Alan Renwick: Thank you. This has been an excellent conversation. We've learned a great deal. You've given us many reasons for optimism.

People often [00:34:00] fear what might be coming, so optimism is healthy. But you've also pointed out dangers and challenges where vigilance and attention are still needed.

We have been discussing the article How to Build Progressive Public Services with Data Science and Artificial Intelligence by Helen Margetts, Cosmina Dorobantu and Jonathan Bright, published in the Political Quarterly towards the end of last year.

We'll put full details in the show notes for this episode. Next week, UCL Uncovering Politics will be taking a little mid-winter mini break, but we'll be back in two weeks with another episode on barriers to disability representation in politics.

The Political Quarterly is a journal that seeks to make the latest research about politics and policy accessible and relevant to wider audiences, so it's well worth exploring.

To make sure you don't miss out on future episodes of UCL Uncovering Politics, subscribe on Apple, Google Podcasts, or whatever provider you use. And while you're there, we'd love it if you could take a moment to rate or review us. I'm Alan Renwick. This episode [00:35:00] was produced by Eleanor Kingwell Benham and Kaiser Kang. Our theme music is by John Mann.

This has been UCL Uncovering Politics. Thank you for listening.