
Photo courtesy of Hilary Evans Cameron

Ryerson and UofT professors are researching how AI could change refugee law

By Minh Truong

Ryerson law professor Hilary Evans Cameron researches the improper denial of refugee claims in Canada and recently found that artificial intelligence (AI) could help prevent these mistakes in the future.

Evans Cameron wrote Refugee Law’s Fact-Finding Crisis: Truth, Risk, and the Wrong Mistake in 2018, which proposes a new legal model for refugee decision-making. The book argues that the Canadian refugee system has a fact-finding problem.

In criminal law, the wrong mistake is convicting an innocent person, according to Evans Cameron. In refugee law, it is denying protection to people who actually need it.

“It is better that we grant status to 10 people who don’t need it, than deny it to one person who does,” argues Evans Cameron in her book.

This reasoning falls under decision theory, the study of how people make choices, and it is where Evans Cameron and University of Toronto professor Avi Goldfarb found that their work overlapped.
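Her 10-to-1 ratio can be read as a statement about the relative costs of the two possible mistakes. As a rough illustration only, assuming (hypothetically) that a claim could be scored with a probability that the claimant genuinely needs protection, here is a minimal Python sketch of how that cost asymmetry would shift the decision rule. The score, cost values and helper function are illustrative assumptions, not part of either researcher's work:

```python
# A minimal, hypothetical sketch of the decision-theory idea behind the
# 10-to-1 ratio. The probability score and decide() helper are invented
# for illustration, not taken from any actual system.

COST_WRONGFUL_DENIAL = 10.0  # denying protection to someone who needs it
COST_WRONGFUL_GRANT = 1.0    # granting status to someone who does not

def decide(p_needs_protection: float) -> str:
    """Pick the action whose possible mistake carries the lower expected cost."""
    expected_cost_if_denied = p_needs_protection * COST_WRONGFUL_DENIAL
    expected_cost_if_granted = (1 - p_needs_protection) * COST_WRONGFUL_GRANT
    return "grant" if expected_cost_if_denied > expected_cost_if_granted else "deny"

# Under a 10-to-1 ratio, denial is the cheaper mistake only when the
# probability of genuine need falls below 1/11, roughly nine per cent.
print(decide(0.15))  # grant
print(decide(0.05))  # deny
```

The point of the sketch is simply that a strong cost asymmetry pushes the threshold for denial far away from a 50-50 call.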

“These decision makers have to read all of this information and come to a conclusion, but they’re human,” Evans Cameron said. “And we know enough about human reasoning to know that it’s fallible, that they will make mistakes.”

When refugee claims are denied, the information behind those decisions is often not monitored or recorded, Goldfarb said in an interview with the University of Toronto.

“What I had been arguing in the book is that right now our law is not clear,” said Evans Cameron. “There’s a split in [refugee] law, which means that the way refugee status decision-making happens right now, board members can decide whether they would rather have their doubts help the claimant or hurt them.”

Evans Cameron argues that these lapses in judgement and human errors are a problem in the refugee law system, and that decision makers are more confident in the accuracy of their decisions than they should be.

“We realized we could argue that we could make refugee status decisions better,” Evans Cameron said. “If the law changes the way I would like it to change, it would allow the kinds of technologies that Avi [Goldfarb] works with to make a positive difference.”

Where AI comes in

Goldfarb, who studies AI and prediction technology, saw that his and Evans Cameron’s work could come together to change refugee law. With the help of machine learning, AI could play an assistive role in evaluating refugee claims by making statistical predictions.

To reduce human error in refugee decisions, Evans Cameron suggests AI could counter the “overconfidence” of a decision maker. The technology can expose their biases and show that their decisions are not as accurate as they believe, pushing them to be more careful and to avoid mistakes.

An AI program could review the information in a refugee claim and show that the evidence does not clearly support one decision. That would make the uncertainty “obvious to decision makers so that they’re not too confident,” according to Evans Cameron.
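In the loosest terms, an assistive, uncertainty-exposing tool might look something like the following minimal sketch, which flags borderline claims for closer human review rather than deciding them. The model score and the width of the “uncertain” band are assumptions made for illustration; nothing here reflects an actual system built by the researchers:

```python
# A hedged sketch of the assistive role described above. The probability
# score and the "uncertain" band width are invented for illustration.

def flag_for_review(p_needs_protection: float, band: float = 0.25) -> str:
    """Flag claims where the model's own estimate is close to a coin flip,
    signalling to the human decision maker that the evidence is genuinely
    ambiguous and that confidence is not warranted."""
    if abs(p_needs_protection - 0.5) < band:
        return "UNCERTAIN: evidence does not clearly support either outcome"
    return "model leans one way; the human decision maker still decides"
```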

Uncertainty and the “Wrong Mistake”

“An AI system, like the one that Goldfarb described, would end up helping refugee claimants if it was able to expose the uncertainty in the law,” said Evans Cameron. “Technology could help to shake [decision makers’] confidence in what they think they know.”

This project is still early in development due to the practical complications of AI technology. “Professor Goldfarb, in his book, explains that prediction machines are all about that [AI] technology, and the pitfalls, and the drawbacks, and the potentials of that technology,” said Evans Cameron.

“One of the big dangers of AI is its lack of transparency, and the fact that there are biases that can get built into those processes,” said Evans Cameron. 

Evans Cameron also points to Petra Molnar’s Technological Testing Grounds, a report released a week before her interview with The Eyeopener, which examines the use of AI in refugee status decisions in an international context.

“Technologies such as automated decision-making, biometrics and unpiloted drones are increasingly controlling migration and affecting millions of people on the move,” the report reads.

With the rise of technologies, marginalized communities such as non-citizens and refugees can have “less robust human rights protections and fewer resources with which to defend those rights,” according to the report.

Evans Cameron added that the growing presence of AI in refugee contexts can be frightening. “It is very scary, the ways that AI is being misused, to violate the rights of refugee claimants and to deny them protection and deny them status.”

While the legal context is complicated, Evans Cameron said she hopes AI has the potential to “do some good.”

“As long as that was a process that ended up helping refugee claimants, then my feeling is the flaws and troubles with the AI wouldn’t end up causing the harms that it would in other kinds of contexts,” Evans Cameron said. 

She is reaching out to scholars who are interested in refugee law and AI to bring the conversations together. 

“Part of what motivated our thinking was the fact that [AI] is coming, whether we like it or not,” Evans Cameron said. “These would be the kinds of parameters that would make this technology potentially helpful.”
