
New Tools, Old Duties: Artificial Intelligence and the Practice of Law

The legal profession is built on trust. How does AI affect lawyers' ethical duties?

[Image: Close-up of the face of the goddess of justice, with binary code in the background.]

Question: How does artificial intelligence challenge the ethics and duties of lawyers?

The rise of artificial intelligence is reshaping how lawyers practice — and challenging the very foundation of legal ethics and professional responsibility. As AI tools become integrated into everything from document review to client communications, they raise urgent and complex questions: How do lawyers safeguard confidentiality when using AI? Who is accountable for errors made by algorithms? And what does competence mean when part of the work is delegated to a machine?

At its core, the legal profession is grounded in trust, independent judgment, and personal accountability. AI, powerful though it is, tests each of these pillars. It introduces new risks that lawyers must recognize and manage. Duties that once seemed straightforward, such as protecting client information and providing competent representation, now demand a deeper understanding of the technologies being deployed.

But this moment also presents an opportunity. Lawyers who take the time to understand AI’s capabilities and limits — and apply them with care — will be better equipped to serve their clients, manage risk, and uphold their professional duties. The tools may change, but the commitment to thoughtful, responsible representation remains as essential as ever.

An AI wrote everything you just read. A lawyer wrote what comes next, but the gap is closing fast. In a profession built on trust, the next question is: Does it matter?

AI technologies directly test the core professional duties that lawyers owe to clients, to the courts, and to the public interest. The ethical framework that has long governed legal practice — competence, confidentiality, independence, loyalty, and integrity — is strained in new and often unpredictable ways by AI.

First, AI demands a new dimension of technological competence. Whether using AI for legal research, contract drafting, or preliminary case assessments, lawyers must understand, at least at a high level, how these systems operate, where they are reliable, and where they are prone to failure. Without that knowledge, lawyers risk relying on inaccurate or biased outputs, such as plausible-sounding but fabricated case citations.

Second, the duty of confidentiality faces significant new risks. Many AI systems, particularly those based on large language models, require substantial amounts of data to function effectively. Uploading client documents into third-party AI platforms, often hosted on servers outside Canadian jurisdiction, creates vulnerabilities that few traditional risk management frameworks anticipated. Lawyers have a duty to evaluate the privacy protections and data handling practices of any AI tool used in practice, and to obtain clear contractual assurances regarding data use, retention, and breach notification.

Third, AI challenges the duty of independent judgment. There is a temptation to over-rely on AI-generated outputs, especially when they appear detailed and polished. However, no AI system can account for the full nuance of a client’s circumstances, nor can it exercise the kind of professional judgment that considers legal, ethical, and human factors in context. Lawyers must remember that AI is a tool and not a substitute for independent, reasoned advice. AI systems trained on historical legal data may even reflect and reinforce systemic biases, for example, in criminal sentencing predictions or family law outcomes.

At the governance level, these challenges have broader implications. Law firms and organizations must update their risk management strategies, procurement processes, and professional development programs to reflect the realities of AI-enhanced legal practice. Lawyers, particularly those in leadership roles, have an opportunity to shape how AI is integrated into organizational culture, balancing innovation with a commitment to ethical practice.

A key pillar of this governance is data stewardship. Organizations must implement clear policies on how information is collected, used, stored, and shared within AI systems, and how it is ultimately destroyed. Data governance frameworks should address questions of consent, security, cross-border data flows, and the ongoing monitoring of third-party AI providers. In parallel, organizations must create accountability mechanisms around AI use, including audit processes, escalation pathways for AI-related concerns, and regular training programs focused on ethical technology adoption.

Governance policies should reinforce the principle that human judgment remains paramount, even in an AI-enabled environment. By embedding ethical considerations and strong data governance into organizational strategy, lawyers can position their organizations, and the broader legal profession, to meet the challenges of AI with integrity and resilience, and to sustain public trust.

An AI started this article, but a human lawyer — with all the judgment, context, and inevitable typos that entails — finished it. The tools may evolve, but responsibility remains firmly, and necessarily, human.