UNB Law

Faculty In Focus: Norman Siebrasse is using AI to better understand decisions of Canada’s top court

Author: Ed Bowes

Posted on Feb 25, 2026

Category: Students, Alumni, Faculty


The Supreme Court of Canada has written thousands of decisions over the past half-century. Embedded in that jurisprudence is a consequential record of how the law itself is constructed: is the Court telling us exactly what to do through clear, predictable rules, or is it leaving room for judgment, discretion, and context through open-ended standards? Understanding how the Court strikes that balance is not an abstract question. It shapes how disputes are resolved every day and carries real consequences for access to justice.

That deceptively difficult distinction lies at the centre of Professor Norman Siebrasse’s latest ambitious and innovative research project. “The hypothesis is that, over the past few decades, the Supreme Court has tended more toward standards rather than rules,” explains Prof. Siebrasse. “For example, a standard might be ‘all persons in this situation should act in good faith,’ while a rule would be something like ‘the first to register wins.’”

Turning that insight into evidence, however, presents a different challenge. For Prof. Siebrasse, the difficulty was not the theory, but the scale. How do you compare a tax case to a tort case, or a patent decision to a Charter ruling, and say something meaningful about the Court’s overall approach to lawmaking? And how do you do it across thousands of cases, without reducing the analysis to anecdote or intuition?

The answer, it turns out, is to use artificial intelligence.

Scaling legal research through AI

By pairing a carefully constructed framework with large-scale AI analysis, Prof. Siebrasse and his co-authors are doing something that would have been practically impossible even a few years ago: systematically mapping how the Supreme Court’s reasoning style has shifted over time.

To make that comparison possible, the research team developed a detailed, three-page rubric explaining what distinguishes rules from standards and how to identify the Supreme Court’s own contribution to each case. The rubric is given to an AI system along with pairs of Supreme Court decisions, and the AI is asked to apply it, determining which case in each pair is more rule-like and which is more standard-like. The AI responds on a five-point scale, ranging from strongly rule-like to strongly standard-like, with a neutral midpoint. That process is then repeated…over and over again.
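The article does not specify how each verdict is recorded, but the logic can be sketched in a few lines. In this illustrative sketch, the model is assumed to return one of five labels per pair, which is mapped to a signed score (negative meaning the first case is more rule-like, positive more standard-like); the exact labels and coding are assumptions, not the team’s actual format.

```python
# Illustrative sketch only: the five labels and their numeric values are
# assumptions. The article says only that the scale runs from strongly
# rule-like to strongly standard-like with a neutral midpoint.

SCALE = {
    "case a strongly more rule-like": -2,
    "case a somewhat more rule-like": -1,
    "neutral": 0,
    "case a somewhat more standard-like": 1,
    "case a strongly more standard-like": 2,
}

def parse_verdict(reply: str) -> int:
    """Map a model's reply onto the five-point scale; reject anything else."""
    key = reply.strip().lower()
    if key not in SCALE:
        raise ValueError(f"unrecognized verdict: {reply!r}")
    return SCALE[key]
```

Forcing every reply into a fixed label set is one simple way to keep tens of thousands of model outputs machine-readable and to catch malformed responses early.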

After curating the dataset to remove very short decisions and brief oral affirmations, the team is left with approximately 3,300 Supreme Court cases spanning 50 years. Those cases are compared in tens of thousands of side-by-side evaluations. The results allow the researchers to rank decisions along a rule–standard spectrum and trace how the Court’s approach has evolved over time.
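The article does not say how the pairwise verdicts are combined into a ranking. One simple approach, shown here purely as an illustrative sketch, is to average each case’s signed verdicts and sort; more sophisticated pairwise-ranking models exist, and the team’s actual method may differ.

```python
# Illustrative sketch: averaging signed pairwise verdicts into a ranking.
from collections import defaultdict

def rank_cases(comparisons):
    """Rank cases from pairwise verdicts.

    comparisons: iterable of (case_a, case_b, verdict), where verdict is an
    integer in [-2, 2]: positive means case_a was judged more standard-like
    than case_b, negative the reverse, 0 a tie.
    Returns case names ordered from most rule-like to most standard-like.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for a, b, v in comparisons:
        totals[a] += v   # a credited with the verdict as given
        totals[b] -= v   # b credited with its mirror image
        counts[a] += 1
        counts[b] += 1
    # Average so cases appearing in more comparisons aren't advantaged.
    avg = {c: totals[c] / counts[c] for c in totals}
    return sorted(avg, key=avg.get)
```

With roughly 47,000 comparisons over some 3,300 cases, each case appears in many pairs, which is what lets a noisy per-pair judgment settle into a stable position on the rule–standard spectrum.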

In theory, a team of human research assistants could have done this work. In practice, it would have been impossible.

“We’re using AI just like a research assistant,” says Prof. Siebrasse. “But we’re talking about roughly 47,000 comparisons. To ask a research assistant to do that many comparisons is just not feasible.”

Rigorous testing to ensure accuracy

Consistency is another key consideration. Human researchers get tired. They bring unconscious bias. Their application of a rubric can subtly shift over time. AI, by contrast, can be tuned for consistency—and tested repeatedly to ensure it is applying the rubric as intended.

That testing phase was critical. Before running the full dataset, Prof. Siebrasse and his co-authors spent extensive time validating the rubric itself. They fed the AI pairs of cases from areas where they were domain experts—such as patent law—and checked whether the AI’s answers matched their own strong intuitions. They also refined the rubric to ensure the AI focused on what mattered most: not whether the statutory framework was rule-like, but whether the Supreme Court’s reasoning was.

“That was one of the things that came out during testing—making sure the rubric was focused on the Supreme Court’s contribution,” says Prof. Siebrasse. “Let’s say you’ve got a tax case, and the tax code in that area is very rule-like, but there’s a gap in the tax code and the SCC fills it in. If the Court fills that gap with a standard, then their contribution has been standard-like. But if you look at the test as a whole, it might still be very rule-like. That would confound our results. There were a lot of little issues like that. Testing helped us catch them before we generated data that wasn’t meaningful.”

AI played a second, less visible but equally important role in the project: writing the code that made the analysis possible. Running tens of thousands of comparisons requires automation, batch processing, and careful parsing of outputs. That work is done through application programming interfaces (APIs) and Python code.

“It required a lot of coding,” explains Prof. Siebrasse. “I used to code as a profession 40 years ago after my engineering degree, but I don’t know Python. All the code was written with the help of AI. So, there are two distinct roles for the AI in the project: one is to do the comparison—that’s the core—and the other is to enable large-scale batch processing, which requires Python code, and we use AI to help write that code.”
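The batch-processing side can be sketched briefly. The sketch below stubs out the model call (`compare_fn`) rather than invoking a real API, and the chunking and single-retry logic are illustrative assumptions; the team’s actual pipeline, built on the Gemini API, is surely more elaborate.

```python
# Illustrative sketch of batched processing with one retry pass.
# compare_fn stands in for the real API call; here it is any function
# taking (case_a, case_b) and returning a verdict.
import time

def run_batches(pairs, compare_fn, batch_size=50, pause=0.0):
    """Process case pairs in batches, retrying failures once."""
    results = {}
    failed = []
    for i in range(0, len(pairs), batch_size):
        for pair in pairs[i:i + batch_size]:
            try:
                results[pair] = compare_fn(*pair)
            except Exception:
                failed.append(pair)
        time.sleep(pause)  # pause between batches to respect rate limits
    for pair in failed:  # one retry pass for transient errors
        try:
            results[pair] = compare_fn(*pair)
        except Exception:
            pass  # give up; leave the pair out of results
    return results
```

At the scale of tens of thousands of calls, transient API failures are inevitable, so some form of retry and rate limiting is what makes the run practical rather than a matter of luck.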

An open-source approach to research

The choice of AI platform was pragmatic. The team tested several systems—Claude, GPT, and Gemini—before settling on Gemini, largely because it offered free API credits that made the extensive testing and debugging phase financially feasible. The important point, Prof. Siebrasse emphasizes, is not which tool was used, but how carefully it was used.

That care reflects a deep awareness of AI’s limitations. Chief among them is transparency. As Prof. Siebrasse puts it, AI systems are, in many respects, “black boxes.” Researchers can provide inputs and receive outputs, but they cannot fully see how the machine reasons its way from one to the other.

“But is that really so different from a human research assistant?” Prof. Siebrasse asks. “You can ask them what they were thinking, and they’ll tell you something that sounds reasonable. That doesn’t mean you fully understand what’s going on.”

The solution, in both cases, is rigorous testing, transparency, and replication. When the project is published, the team plans to release not only a traditional law review article, but also the full rubric, the complete source code, and extensive documentation. Anyone who wants to replicate—or challenge—the findings will be able to do so.

That open-source approach is part of what makes the project distinctive. It is not just an answer to a substantive question about rules and standards, but a blueprint for how AI can be responsibly integrated into legal scholarship.

The implications extend well beyond this single study. Prof. Siebrasse sees AI opening the door to a “big data” era in empirical legal research, where entire bodies of case law can be analyzed systematically rather than selectively. At the same time, he is clear-eyed about what AI cannot yet do. On the doctrinal side of legal scholarship, human judgment remains central. AI may become a more powerful search tool, but it is not yet capable of replacing close legal reasoning.

Prof. Siebrasse does not see artificial intelligence as the death of the research assistant. Instead, he views it as a tool that expands the scope of legal scholarship, providing new ways to see and analyze patterns in the law, and enabling scholars to ask and answer questions that once seemed simply too large or complex to tackle.

Finding a clear shift in reasoning

The findings confirm Prof. Siebrasse's hypothesis—and the shift is more dramatic than expected. Over the past fifty years, the Supreme Court has moved substantially toward standard-like reasoning, with the steepest decline in rule-like reasoning beginning in 1982, precisely when the Charter of Rights and Freedoms came into force. That timing is not coincidental. The Charter required courts to balance competing rights and apply open-ended tests like "reasonable limits" in a "free and democratic society"—inherently standard-like reasoning. What surprised the researchers was how that approach spread beyond Charter cases into criminal law, administrative law, and other areas.

"It's like a contagion effect," says Prof. Siebrasse. "The Court adopted a more discretionary style for Charter cases, and that style then influenced how they reasoned in non-Charter cases."

The research also found that changing case composition—more Charter cases, fewer tax cases—explains only a modest portion of the shift. The rest reflects a genuine change in judicial attitude. Even more striking, Supreme Court decisions have more than doubled in length over the study period, and longer decisions are systematically more standard-like. Whether that length reflects greater care or simply less editing, the effect is real: as decisions grow, rules become harder to extract.

Why rules and standards matter: shaping access to justice

The distinction between rules and standards is not just theoretical; it carries real consequences for lawyers, litigants, and access to justice. Rules create certainty, making disputes easier—and cheaper—to resolve. Standards, while potentially more equitable, introduce uncertainty that can drive up costs and push cases toward trial.

“Most cases settle, because it’s very expensive to go to court,” says Prof. Siebrasse. “When you want to settle, having certainty in the law is very convenient. Rules allow a more efficient legal system, and that’s not just hard-nosed economic efficiency—it’s access to justice. Everyone is concerned about access to justice, and access to justice is hard because it’s expensive. To the extent we can make justice cheaper, more people have access to it.”

In this sense, the project is about more than measuring the Supreme Court’s past.

It is about understanding how judicial reasoning choices ripple outward, shaping the justice system and the experience of justice itself. By combining rigorous scholarship with innovative tools, research like this helps illuminate the forces that guide the law—and, ultimately, who benefits from it.