Migrant rights campaigners called for the Home Office to withdraw the system, claiming it was “technology being used to make cruelty and harm more efficient”, reports Asian Lite News
A Home Office artificial intelligence tool which proposes enforcement action against adult and child migrants could make it too easy for officials to rubberstamp automated life-changing decisions, campaigners have said.
As new details of the AI-powered immigration enforcement system emerged, critics called it a “robo-caseworker” that could “encode injustices” because an algorithm is involved in shaping decisions, including returning people to their home countries.
The government insists it delivers efficiencies by prioritising work and that a human remains responsible for each decision. It is being used amid a rising caseload of asylum seekers who are subject to removal action, currently about 41,000 people.
Migrant rights campaigners called for the Home Office to withdraw the system, claiming it was “technology being used to make cruelty and harm more efficient”.
A glimpse into the workings of the largely opaque system has become possible after a year-long freedom of information battle, in which redacted manuals and impact assessments were released to the campaign group Privacy International. They also revealed that people whose cases are being processed by the algorithm are not specifically told that AI is involved.
The system is one of several AI programmes UK public authorities are deploying as officials seek greater speed and efficiency. There are calls for greater transparency about government AI use in fields ranging from health to welfare.
The secretary of state for science, Peter Kyle, said AI had “incredible potential to improve our public services … but, in order to take full advantage, we need to build trust in these systems”.
The Home Office disclosures show the Identify and Prioritise Immigration Cases (IPIC) system is fed an array of personal information about people who are the subject of potential enforcement action, including biometric data, ethnicity, health markers and data about criminal convictions.
The purpose is “to create an easier, faster and more effective way for immigration enforcement to identify, prioritise and coordinate the services/interventions needed to manage its caseload”, the documents state.
But Privacy International said it feared the system was set up in a way that would lead to human officials “rubberstamping” the algorithm’s recommendations for action on a case “because it’s so much easier … than to look critically at a recommendation and reject it”.
For officials to reject a proposed decision on “returns” – sending people back to their home country – they must give a written explanation and tick boxes relating to the reasons. But to accept the computer’s verdict, no explanation is required: the official clicks one button marked “accept” and confirms the case has been updated on other Home Office systems, the training manuals show.
Asked if this introduced a bias in favour of accepting the AI decision, the Home Office declined to comment. Officials describe IPIC as a rules-based workflow tool that delivers efficiencies for immigration enforcement by recommending to caseworkers the next case or action they should consider. They stressed that every recommendation made in the IPIC system was reviewed by a caseworker who was required to weigh it on its individual merits. The system is also being deployed on cases of EU nationals seeking to remain in the UK under the EU settlement scheme.
Jonah Mendelsohn, a lawyer at Privacy International, said the Home Office tool could affect the lives of hundreds of thousands of people. “Anyone going through the migration system currently has no way of knowing how the tool has been used in their case and if it is putting them at risk of wrongful enforcement action,” he said. “Without changes to ensure algorithmic transparency and accountability, the Home Office’s pledge to be ‘digital by default’ by 2025 will further encode injustices into the immigration system.”
Fizza Qureshi, the chief executive of the Migrants’ Rights Network, called for the tool to be withdrawn and raised concerns the AI could lead to racial bias. “There is a huge amount of data that is input into IPIC that will mean increased data-sharing with other government departments to gather health information, and suggests this tool will also be surveilling and monitoring migrants, further invading their privacy,” she said.