
The Ethics of Saying “No” to AI

Why Refusing AI May Soon Be a Legal Risk for Lawyers


Artificial intelligence is rapidly becoming embedded in the infrastructure of modern legal practice.

But an emerging ethical question is now confronting the profession:

Can a lawyer ethically choose not to use AI?

A recent expert opinion published by Law.com explores this issue and raises a provocative possibility: refusing to use AI in legal practice may eventually be viewed as malpractice.

The argument challenges a long-standing assumption within the profession—that technological adoption is optional rather than essential.

Increasingly, that assumption may no longer hold.


The “Old School Lawyer” Problem

Many lawyers who have built successful careers without advanced technology view the rapid rise of AI with skepticism.

Some believe traditional methods are sufficient.

Others worry about:

  • AI hallucinations and false citations

  • confidentiality risks

  • loss of professional judgment

  • overreliance on automation

These concerns are legitimate and widely discussed across the profession.

However, the ethical question is not simply whether AI carries risk.

The more important question is:

Does refusing to use AI create even greater risk for clients?


When Not Using AI Becomes Malpractice

During a discussion at the New York State Bar Association Annual Meeting, U.S. District Judge Jesse Furman of the Southern District of New York made a striking observation.

He noted that while lawyers worry about malpractice from using AI incorrectly, the opposite risk may soon become more significant.

In other words:

At some point, failing to incorporate AI into legal practice could itself be considered malpractice.

This perspective reflects a fundamental principle of professional responsibility:

Lawyers must exercise the level of skill commonly used by competent practitioners.

As technology becomes a standard professional tool, the baseline standard of competence evolves with it.


The Competence Rule and Technology

Under Rule 1.1 of the New York Rules of Professional Conduct, lawyers must provide competent representation.

While the rule itself does not explicitly reference technology, the commentary clarifies that attorneys have a duty to:

“remain current in law and practice, including technology.”

This means that competence is no longer limited to legal doctrine alone.

It increasingly includes understanding tools that influence how law is practiced, including:

  • AI-assisted legal research

  • predictive coding in discovery

  • digital evidence analysis

  • document review automation

  • AI-generated summaries and analytics

Failing to understand these tools could mean failing to meet the profession’s evolving standard of care.


The Litigation Risks of Ignoring AI

The article highlights several practical scenarios where failing to use AI could materially harm a client.

Missing Critical Evidence in Discovery

Modern litigation often involves massive datasets containing emails, documents, and digital communications.

AI-driven tools such as predictive coding can analyze these datasets rapidly.

A lawyer reviewing documents manually may overlook key evidence buried in thousands of files.

If that evidence could have been found using widely available technology, the lawyer’s failure to use it could raise malpractice concerns.

Failing to Challenge AI-Generated Evidence

In one example discussed in the article, a criminal defendant was misidentified by facial recognition software.

His legal team successfully challenged the evidence by reverse-engineering the AI system used to identify him.

If lawyers fail to investigate the reliability of digital evidence generated by algorithms, they risk overlooking critical flaws in the prosecution’s case.

Missing Hallucinations in Opposing Filings

Courts have already seen cases where lawyers submitted briefs containing fabricated citations generated by AI systems.

But the article raises another important question:

If opposing counsel submits AI-generated arguments containing hallucinated citations, should lawyers use AI tools to detect them?

Failing to do so could mean missing errors that directly affect the outcome of a case.



Discovery, Privilege, and AI

Another growing concern involves privilege and confidentiality.

Many publicly available AI systems process user inputs through third-party infrastructure.

If lawyers input confidential information without proper safeguards, they risk:

  • waiving attorney-client privilege

  • exposing work-product material

  • violating confidentiality rules

Bar associations are already issuing guidance on how lawyers should navigate these risks.

The challenge is therefore not whether AI should be used, but how it should be used responsibly.

The SavvyLex Perspective

At SavvyLex, we view the debate over AI in legal practice through a governance-first lens.

The future of legal AI is not about replacing lawyers.

It is about augmenting professional judgment with responsible technology.

Modern legal competence will increasingly require understanding how to:

  • deploy AI tools responsibly

  • audit AI outputs for accuracy

  • manage confidentiality and privilege risks

  • detect algorithmic bias and hallucinations

  • supervise AI-assisted workflows

This is precisely why the SavvyLex ecosystem—including Vera, LexAgents, and SkillBuilder—is designed around AI literacy, governance, and professional oversight.

Legal professionals should not simply use AI.

They should understand how it works, where it fails, and how to supervise it responsibly.


The Future of Ethical Legal Practice

The profession has faced similar transitions before.

Legal research moved from physical libraries to digital databases.

E-discovery transformed document review.

Now, AI is reshaping the way legal analysis is performed.

History suggests the same pattern will repeat:

Technologies that begin as optional tools eventually become baseline professional infrastructure.

The ethical challenge for lawyers is therefore not whether to adopt AI.

It is how to adopt it competently, responsibly, and ethically.

Key Takeaway

The legal profession is entering a new phase where technological literacy is part of professional competence.

Lawyers who ignore AI risk falling behind.

And in the near future, failing to use available tools that could materially benefit a client may no longer be seen as a matter of personal preference.

It may be seen as a failure of professional duty.

Source: Joel Cohen & Douglas Nadjari, The Ethics of Saying “No” to AI, Law.com (March 6, 2026).


 
 
 
