Automating Administrative Law’s Future: The Immigration Story

Canadian immigration authorities are using automated decision-making systems. What are their impacts?

Over the past decade, Canadian immigration authorities have been quietly and rapidly developing, testing, and deploying various automated decision-making systems (also known as “algorithmic decision-making systems” or “ADMs”). These ADMs can augment or replace the judgment of human decision-makers, with or without the use of artificial intelligence.

The Use of ADMs in Canadian Immigration and the Impact on the End-User

As immigration and refugee law practitioners, we have seen the diffusion of ADMs through the government institutions that our clients frequent: beginning with Immigration, Refugees and Citizenship Canada, then the Canada Border Services Agency, and now the Immigration and Refugee Board of Canada. Hundreds of these projects exist in draft form, but very few are brought to the public’s attention until they are deployed. Examples of ADMs deployed by immigration authorities include the use of advanced analytics to triage temporary resident visa applications and the Integrity Trends Analysis Tool, which mines data and extracts risk and fraud patterns from immigration applications. The recently introduced ReportIn application allows migrants facing removal from Canada to check in remotely with border services officers through facial recognition technology and geolocation software, instead of by phone call or an in-person meeting at a Canada Border Services Agency office. These tools are being scaled up even as staff cuts and bureaucratic reorganizations take place.

With Canada’s — and arguably the rest of the world’s — shift toward immigration enforcement (and, more broadly, technological surveillance), it can be easy to overlook the experiences of ADM end users in favour of the government’s risk-mitigation and efficiency narratives. These ADM tools and technologies are built on and impact vulnerable individuals who are least able to cope with and contest their use, precisely because of the low-rights, high-volume backdrop against which they are deployed (see, for example, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks). Not only do these systems operate without effective regulatory safeguards, but the absence of oversight invites further experimentation and potential misapplication. As Petra Molnar sets out in her fantastic debut book, The Walls Have Eyes, this is also part of a global trend, with much of the emerging technology hidden in the black-box systems of third-party tech companies and targeting refugee and displaced populations.

Litigating Against the Machine

If there is one lesson from our observations about the use of ADMs in the immigration space, it is that ADMs will spread to other areas of administrative and public law. As a result, lawyers should be aware of the challenges of litigating cases where the use of these systems is known or alleged. Unfortunately, owing to factors such as proprietary opacity, national security, privacy, and “gaming the system” concerns, transparency and public awareness have taken a back seat.

One challenge is the lack of disclosure and notice given by government departments about how these tools and technologies work. In immigration, a broad legislative mandate under s. 186.1(5) of the Immigration and Refugee Protection Act, coupled with soft-law directives like the Directive on Automated Decision-Making, puts very little onus on decision-makers to provide procedural fairness to applicants contesting the denial of a benefit. The strategic use of these tools to automate eligibility determinations in “low-risk” cases, combined with the requirement of a “human in the loop” to review adverse findings, has further insulated them from inquiries into their explainability.

Currently, besides litigation, the only way to learn about the different systems at play is through Access to Information and Privacy requests (ATIPs), which often come back heavily redacted. ATIPs for research or non-client-specific purposes are also often delayed, with requesters waiting up to several years before the information is released; by that point the results may be stale, as the ADM tools may have been updated or replaced. A central focus of our own advocacy, through a non-profit organization called Artificial Intelligence Monitor for Immigration in Canada and Internationally, has been to educate the public by releasing and analyzing available material and data, in hopes of securing greater transparency from the government.

The Human Rights Discourse

Another challenge is the growing discourse around rights (including a right to a human decision) in ways that engage the Charter. Which section(s) of the Charter are best suited to address the harms caused by ADMs? This is tricky in immigration, as many of the originating applications come from applicants outside Canada, to whom the Charter does not apply.

Canadian immigration, overall, has been an arena where Charter rights are treated exceptionally, with efforts made to exclude applicants from protection and to point instead to safety valves and other procedural guardrails.

How ADMs will account for Charter values, and how they will navigate the discrimination claims that can arise from biased data, human-introduced bias, and biased systems and processes, will be important to track in the coming years.

Convincing Judges to Engage

A third challenge is judges’ apparent hesitation to engage with ADMs’ impact on decisions and on decision-makers more generally. Take the case of Mehrara v. Canada (Citizenship and Immigration), 2024 FC 1554, where the Applicant adduced a 1000+ page affidavit about Immigration, Refugees and Citizenship Canada’s use of Chinook, a processing tool for temporary resident applications that displays client information in a visually assistive manner.

Notwithstanding the multitude of possible concerns with Chinook (case annotations are generated through advanced analytics, refusal reasons are pre-generated, and the spreadsheets that officers use are deleted at the end of each day), the Court was not willing to displace core administrative law principles to find that the use of this decision-making aid was procedurally unfair.

Questions to Answer: How Will Administrative Law Adapt?

These challenges aside, key questions remain about the common law itself as Baker’s procedural fairness framework and Vavilov’s reasonableness standard of review are reconciled with these new technologies:

How will courts reframe core administrative law principles to account for the use of ADM tools and technologies in decision-making?

  • What is the appropriate standard of review for ADM decisions?
  • What does a reasonable ADM decision look like?
  • What does procedural fairness look like in ADM decisions?

ADMs Are Changing the Rules of the Road

What is clear is that the use of technological tools is migrating to other spaces within the administrative state. These include contested spaces, where the persons impacted may lack the capacity, or access to the knowledge, needed to understand these technologies. The algorithms themselves, incorporating unsupervised machine learning and deep learning tools, are becoming more complicated and more opaque. Front-line decision-makers are being replaced with office-level bureaucrats. Judges are being asked to weigh in on data and technology issues. Lawyers are expected to become experts on tools about which they have little information.

As we continue to consult, advocate, and litigate in this space, it is becoming abundantly clear that, as a Bar and a legal profession, we have much work to do.