Defining Technology Competence in the Age of GenAI

Introduction to the New(ish) Technology Competence

Technology competence is not new, but in the age of generative artificial intelligence (GenAI), it applies in new, more complex ways. Under ABA Model Rule 1.1 and its Comment 8, competent representation includes understanding the benefits and risks associated with relevant technology. When it issued ABA Formal Opinion 512, the ABA reaffirmed that the duty of technology competence applies to GenAI, along with all other ethical duties, including:

  • Candor—Model Rules 3.1 and 3.3: Don’t submit false citations, mischaracterized law, or fake facts to court. You’re responsible for confirming that sources are real and any characterization of (or quotes from) content is accurate.

  • Confidentiality—Model Rule 1.6: Don’t enter client information or other sensitive information into unsecured systems. Know where the data goes, whether it’s retained, and who may have access. If you don’t know, you may be breaching your duty of confidentiality.

  • Independent Judgment—Model Rule 2.1: Don’t let a technology tool think for you and drive strategy. GenAI may offer suggestions that sound plausible, but it does not understand the law, your case, or your client. Maintain control.

  • Supervision—Model Rules 5.1 and 5.3: Don’t delegate a task to a tool if you can’t adequately supervise and review it. Delegation includes oversight, while assignment equates to offloading—don’t assign work to technology tools.

These obligations aren't new, but with GenAI, they apply in new, more complex ways. As outlined in my March 2025 white paper Digital Due Diligence: A Practical Guide to AI and Ethics in the Legal Profession, competence is the foundation of every other ethical duty. It’s what keeps experimentation from becoming reckless. It’s what prevents us from outsourcing legal judgment to a machine. And it’s what helps us see that even though GenAI tools often sound authoritative and confident, their output is often misleading, fabricated, or incomplete. (For a discussion of technology competence within the context of Microsoft Word, see my 2020 white paper The Ethical Duty of Technology Competence: What Lawyers Need to Know.)

New Competence Challenges with GenAI

GenAI brings powerful new possibilities to the legal profession—and serious new risks. Earlier AI tools already reshaped law practice by enabling document review, e-discovery, legal search, and contract automation. Now GenAI “may assist lawyers in tasks such as legal research, contract review, due diligence, document review, regulatory compliance, and drafting letters, contracts, briefs, and other legal documents,” according to ABA Formal Opinion 512.

But GenAI marks a shift in functionality, flexibility, and excitement. It’s also fundamentally different from anything lawyers have used before. GenAI creates new content in response to a question written in natural language (a prompt). It doesn’t retrieve or classify. It generates. It doesn’t reason or analyze. It produces one word at a time, based on statistical patterns in past language. Its goal is fluency, not truth or logic. It may sound thoughtful, but it does not think. It may mimic legal reasoning[1], but it does not understand law. These differences matter. And misunderstanding them can lead to false statements, confidentiality breaches, and loss of client trust, as described in Opinion 512. Even though these mistakes may be surprising, lawyers must take responsibility for the tools they use—and bear any consequences. So GenAI demands a new kind of technology competence. It requires sustained awareness, technical and legal judgment, and a heavy dose of professional accountability.

How Earlier Tech Experience Can Be Misleading

To create the right mindset for developing technology competence with GenAI, you should understand how its fundamental differences require different approaches. Assuming that GenAI will work just like your old technology will lead you astray. The comparisons below show how GenAI operates differently for the same tasks and how those differences might catch lawyers by surprise.

Traditional Search vs. GenAI Responses
  • Expectation: GenAI “searches” like a legal database
  • Reality: GenAI creates, it does not retrieve
  • Risk: hallucinations, shuffling, fabrications

Spelling & Grammar Check vs. GenAI Rewrites
  • Expectation: GenAI fixes mistakes but does not alter meaning
  • Reality: GenAI “edits” by rewriting, not just correcting
  • Risk: overwriting key legal language, introducing new terms of art, meaning drift

Document Automation vs. GenAI Text Creation
  • Expectation: GenAI works like traditional doc auto
  • Reality: GenAI rewrites rather than following templates
  • Risk: inconsistent or missing legal language

Predictability vs. GenAI’s Variable Outputs
  • Expectation: the same input produces the same output
  • Reality: GenAI’s responses can change every time
  • Risk: unpredictable, unrepeatable responses

Logical Reasoning vs. Probabilistic Generation
  • Expectation: GenAI follows logic rules set by experts
  • Reality: GenAI generates probabilistically, not hard-coded if/then rules
  • Risk: unexplainable and hard-to-audit outputs


5 Parts of Competence

So what does it mean to be “technologically competent” in the age of GenAI? Technology competence is best understood as a cluster of five interdependent behaviors: developing awareness, understanding risks and benefits, keeping up with changes, building reasonable skill, and using tools competently.

1.     Developing an Awareness of Technology

Awareness is the foundation of technology competence. It means more than recognizing product names. It involves knowing what technologies exist, what they are designed to do, how they work (in broad terms), and how they affect legal practice. Lawyers do not have to track every tool or be early adopters, but they must have enough awareness to notice when new technology has become widespread or professionally relevant. Once a tool crosses a threshold of visibility and relevance, awareness becomes ethically necessary—coverage in ethics opinions, bar guidance, or mainstream legal reporting are good indicators.

The widespread interest in and reporting on GenAI offers a clear example of a technology category that demands lawyers’ awareness (though use of GenAI is not required). Tools like ChatGPT, Microsoft Copilot, Lexis+ AI, Thomson Reuters CoCounsel, and Vincent by vLex are examples of specific tools within the GenAI category that are now part of the legal conversation. Recognizing these names is a good start, but meaningful awareness goes beyond name recognition.

Lawyers must be able to distinguish among tools, understand their general function, and identify where they may raise ethical concerns. This includes knowing whether a tool is used for text generation, data retrieval, document automation, or legal research. Those categories determine how the tool works, what risks it presents, and what level of oversight is required. Lawyers may decide that the risk is not worth the reward. While choosing not to use GenAI may be prudent, choosing to remain unaware of its existence or implications is not.

Finally, lawyers must recognize the difference between consumer-grade and enterprise-grade tools. This distinction has implications in three areas: reliability, confidentiality, and rate of change. Lawyers need not know engineering details, but they must understand how these categories differ and why they matter in legal practice.

First, enterprise tools are structured to protect client confidentiality through negotiated terms, short retention windows, and no human review. Consumer-grade tools, by contrast, often retain user prompts and inputs, may allow for human review, and may use data for model training.

Second, even legal-specific enterprise tools that were once deterministic (gave specific, pre-defined outputs) and stable are now adding generative features that behave probabilistically and introduce variability. That means a lawyer who once relied on consistent behavior from the tool must now test and supervise new outputs with care.

Third, enterprise tools are changing in important ways—vendors are experimenting with retrieval-augmented generation (RAG), adjusting processing patterns, and tuning their systems for better performance in legal contexts. While these changes are promising, they are also uneven. What’s announced may not match what’s delivered—at least not yet. (Compare Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (2024) by Stanford Institute for Human-Centered AI with the Vals Legal AI Report (2025).)

As tools improve and become part of day-to-day legal workflows—especially for writing, editing, or research—the ethical duty to pay attention to progress grows.

2.     Understanding the Risks and Benefits Associated with Technology

The second part of technology competence is the ability to evaluate both the risks and the benefits of a tool. Lawyers often lean too far in one direction. Some adopt tools quickly, chasing speed or novelty without caution. Others reject new tools entirely, guided by fear or uncertainty, and miss opportunities to improve their work. Competence requires balanced judgment. Lawyers must ask what the tool can do reliably, where it fails, how it handles data, and what governs its use.

This is especially important with GenAI. Used thoughtfully, GenAI can support legal practice by helping with many things from legal research to early-stage drafting to summarization. With these benefits, GenAI can free up time for strategic and analytical work. But these benefits require oversight.

The most widely recognized risk is hallucination—the generation of fluent, but fake or misleading content. This includes fabricated case names and citations, or more subtle distortions, such as real cases misquoted, legal standards blended, or qualified rules stated as definite. What makes hallucinations dangerous is how natural and plausible they sound. Several recent sanctions cases (as tracked in Artificial Intelligence: AI's Siren Song by the law library at University of Illinois Chicago) show how lawyers have relied on hallucinated citations, filed them with courts, and triggered disciplinary action—not out of bad faith, but from over-trusting the tool and failing to supervise the result. (For an overview of cases before November 2024, see Breaking Bad Briefs: A Snapshot of Lawyers, Litigants, and Experts’ Use (and Misuse) of GenAI in Court Filings by Heidi K. Brown.)

Hallucination is more likely when a legal issue is niche, under-reported, or inconsistently described in public data. When content is widely discussed by reliable writers, accurate associations are plentiful, so it is more likely to be reproduced accurately by the GenAI model. But accuracy is never guaranteed and the randomness found in GenAI is an intentional part of its design. Additionally, even if a GenAI tool could recognize a discrepancy when looking at a completed text, it cannot spot it while it generates because it only looks forward to the next word. This means that, unlike humans who can correct themselves when they realize they’ve made a mistake, GenAI does not “reason through” legal content and cannot reflect on what it has generated to evaluate itself for mistakes. So GenAI output may sound legally accurate while being doctrinally wrong, as is typically the case with hallucinations.
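This forward-only, probabilistic process can be sketched in a few lines of Python. The word table and sampler below are a toy illustration, not any real GenAI system: each word is chosen from a probability table conditioned only on the previous word, with deliberate randomness and no step that checks the result against real sources.

```python
import random

# Toy "language model" (hypothetical, for illustration only): each next word
# is sampled from a probability table, just as GenAI samples the next token.
NEXT_WORD_PROBS = {
    "the":       {"court": 0.6, "statute": 0.4},
    "court":     {"held": 0.7, "dismissed": 0.3},
    "statute":   {"provides": 1.0},
    "held":      {"that": 1.0},
    "dismissed": {"the": 1.0},
}

def generate(start, length, seed=None):
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break
        # Forward-only: the pick depends on the previous word and randomness.
        # There is no step that looks back to verify what was already written.
        words.append(rng.choices(list(choices), weights=list(choices.values()))[0])
    return " ".join(words)
```

The output will always read fluently because every word follows plausibly from the last, yet nothing in the loop confirms that the resulting sentence is true. That is the mechanism behind a confident-sounding hallucination.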

Confidentiality is another core risk. Many GenAI tools retain user prompts, store inputs for extended periods, or allow for human review. That conflicts with a lawyer’s duty under Rule 1.6 to protect information related to client representations. Even anonymized content may pose a risk: unusual fact patterns or uncommon legal scenarios may still lead back to the original information about your client. ABA Formal Opinion 511 warns that presenting client information as a “hypothetical” does not eliminate the risk of recognition. ABA Formal Opinion 480 further explains that even public court filings may remain confidential if they are difficult to access or identify. So the information that must be kept confidential is broader than most lawyers realize, and it defies the common assumption that anything filed with the court or available on PACER can be treated like public information with no protections.

These risks differ across platforms. Consumer tools are optimized for engagement and scale. Enterprise tools for legal practice are more likely to provide safeguards, including contractual limits on data retention and stricter handling of inputs. Understanding that distinction is not only technical—it’s ethical. Lawyers must know what kind of tool they’re using and what obligations follow.

You should also be aware of the regulatory landscape where these tools may be used. You can track global AI regulation with an interactive world map of AI law, regulatory, and policy developments around the world (Global AI Regulation Tracker), and you can track standing orders in US courts (now numbering 82) and practice advisories issued in North America (RAILS Standing Order Tracker).

Lawyers aren’t expected to eliminate all risk—but they are expected to act reasonably without compromising other duties. Working faster is not a gain if it leads to factual errors or breaches confidentiality.

3.     Keeping Up with Changes to Technology Used in Legal Practice

The third part of technology competence is staying current. Understanding is not a one-time activity. It must be maintained over time, as tools add features and introduce new capabilities. For tools you don’t use directly, tangential awareness may be enough. But once you start using a tool—even occasionally—keeping up with how that technology changes becomes part of your ethical duties. You should know what the tool does, what’s new, and how those changes might affect your use of it.

This doesn’t require deep technical knowledge, but you will need consistent habits. Set aside time for whatever works for you:

  • Browse legal tech headlines on LinkedIn
  • Skim legal tech newsletters
  • Listen to short legal tech podcast segments
  • Read product updates
  • Watch a five-minute vendor video
  • Get updates from a reliable colleague
  • Attend trainings or CLEs

That is often all it takes to maintain a baseline understanding of the tools you rely on—and to hold an informed conversation when your colleagues raise questions about them.

This ongoing engagement matters because legal tech evolves quickly, and some of the most significant recent changes directly affect how tools work. An important change is the introduction of retrieval-augmented generation (RAG). Tools using RAG try to ground GenAI outputs in curated legal content, like caselaw, statutes, or practice guides, rather than drawing from general training data. You need not know exactly what the term “RAG” stands for, but you need to understand that many tools are now trying to anchor output in real legal materials. That’s a meaningful shift. It reduces some, but not all, risks and changes what kind of review is necessary.
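The RAG flow can be sketched in Python. The functions below are hypothetical stand-ins, not any vendor’s API; real tools use far more sophisticated retrieval, but the sequence is the same: retrieve curated sources first, then generate only from what was retrieved.

```python
# Simplified sketch of retrieval-augmented generation (RAG).
# keyword_search and answer_with_rag are hypothetical illustrations.
def keyword_search(question, library):
    """Rank curated passages by how many words they share with the question."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(p.lower().split())), p) for p in library]
    return [p for score, p in sorted(scored, reverse=True) if score > 0]

def answer_with_rag(question, library):
    passages = keyword_search(question, library)
    if not passages:
        # Grounding: with no supporting source, refuse rather than guess.
        return "No supporting source found."
    # A real tool would send the retrieved passages to the model along with
    # the question, so the generated answer is anchored in real legal text.
    return f"Based on: {passages[0]}"
```

The key design point is the refusal branch: a RAG system that finds no supporting source can decline to answer, which is why RAG reduces (but does not eliminate) hallucination risk.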

Another significant development is the improvement of how tools handle chunking and tokenization, processes by which legal texts are split, processed, and reassembled. Early generative tools often split content in ways that broke up logical units of legal reasoning, which led to misinterpretations or incoherent output. Newer processes are fixing this, preserving more of the structure and logic within legal documents. Again, you need not use the tech jargon, but know that these changes affect output quality, and you should adjust your review process.
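The difference can be illustrated with two toy chunking strategies in Python (both hypothetical simplifications of what real tools do): one cuts at a fixed character count, splitting sentences mid-clause, while the other keeps each paragraph, a logical unit of reasoning, intact.

```python
def naive_chunks(text, size):
    """Early-style chunking: cut every `size` characters, even mid-clause."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def paragraph_chunks(text):
    """Structure-aware chunking: keep each paragraph intact as one unit."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]
```

A fixed-size cut can separate a rule from its exception across two chunks, so the model processing one chunk never sees the qualifier; paragraph-aware chunking preserves that context.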

These changes are especially relevant in legal-specific enterprise-grade tools. Many now incorporate GenAI components that introduce new risks. Lawyers may be used to trusting these tools without extensive oversight—but with generative features integrated, that trust must be reevaluated. Just because a tool is from a reliable provider does not mean that every part of it is just as reliable. Competence requires recognizing these shifts and adjusting your practices.

Understanding these technical differences is a matter of ethics. If you don’t know which kind of system you’re using, or if you’re unaware of recent changes to how it works, you may be exposed to hidden risks. Technology changes quickly. A tool might quietly add a generative feature or change its data handling policies—all without your realizing it. If you continue using that tool based on outdated assumptions, you’re not using it competently.

Once you adopt a tool in your legal work, you must keep up with it. That includes its features, its behavior, and its risks. Competence means knowing how the tools you use actually work, and how they’ve changed. Failing to do so may result in errors, confusion, or ethical violations. Technology evolves—lawyers must evolve with it.

To assess whether you’re keeping up, consider how you respond in conversations about technology tools. If someone mentions a tool you use and you feel lost—or can’t ask an informed question about it—you may not be doing enough to meet the standard for competence.

4.     Developing Reasonable Skill in Tools You Use

The fourth aspect of technology competence is skill—specifically, “reasonable skill” in the tools you use in your legal practice. If you’re using a tool, you must be able to use it intentionally and responsibly. You must know what the tool is designed to do, how it fits into your workflow, and what level of accuracy, consistency, or variability to expect from its outputs. That includes recognizing what the tool can’t do, what errors are common, and when the tool may no longer be helpful.

Skill requires knowing how to interact with the tool and what to expect from it. This includes understanding how to prompt effectively and what type of task your tool is designed to do. (See Progressive Prompting Techniques (2025) by Colin Lachance, OBA Innovator in Residence and Introducing AI Prompt Worksheets for the Legal Profession (2024) by Jennifer Wondracek, Director of the Law Library at Capital University Law School).

It also requires understanding that GenAI tools, even when used for research, will always generate or synthesize (not retrieve) responses, which should never be treated as authoritative. This mirrors a familiar issue from traditional research: citing a headnote instead of reading the case. Headnotes help orient you, but they are not citable authority. They are editorial summaries. Similarly, GenAI legal research tools can help you find direction, but they cannot replace the underlying source. Treating a summary as a substitute for a case is a failure of skill. Failure to distinguish between verified information and generated output means that you will not properly review the work.

You should understand what type of tool you are using so you know what to expect. This is particularly important with GenAI, which introduces variability by design. These tools don’t retrieve fixed answers—they generate output based on probability. That means the same prompt may yield different results. This variability is called “non-determinism.” Comparatively, deterministic tools behave the same way every time, which is what lawyers have come to expect. You need not know the technical terms, but you must understand the effect. If you treat generative tools as if they were consistent, fixed, or authoritative, you will misjudge what they are producing.
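A small Python sketch (with hypothetical function names) makes the contrast concrete: a deterministic tool returns the same edit every time, while a generative rewriter samples its word choices, so the same input can produce different output.

```python
import random

def deterministic_edit(text):
    """A deterministic tool: the same input always yields the same output."""
    return text.replace("utilize", "use")

def generative_rewrite(text, seed=None):
    """A toy generative rewriter: word choice is sampled, so output varies."""
    rng = random.Random(seed)
    synonyms = {"utilize": ["use", "apply", "employ"]}
    # Each word with alternatives is chosen at random, mimicking the
    # sampled word choices that make GenAI output non-deterministic.
    return " ".join(rng.choice(synonyms.get(w, [w])) for w in text.split())
```

Run the deterministic function a hundred times and you get one answer; run the generative one across different random seeds and the wording shifts. That variability is the behavior lawyers must plan their review process around.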

To know whether you are responsibly using a tool, ask whether you can explain why you chose a tool and what your goals are. You should be able to say something like: “I used this tool to help summarize case law. It showed me a draft summary, but I read the full opinion and confirmed the holding before I cited it.” That kind of explanation reflects a reasoned, informed use of the tool. If your answer is “I used whatever came out,” that reflects a lack of understanding—and likely a lack of supervision.

5.     Competently Using Your Chosen Tools

The final part of technology competence is practical, ethical execution. This means applying your awareness, understanding of risks and benefits, knowledge of current developments, and functional skills in ways that align with your substantive professional and ethical duties.

When you choose to use technology, use the right tools the right way for the right reasons. Your use should reflect sound judgment, protected client data, verified legal information, and supervised output. If your use of a tool undermines any of these requirements, your use is not competent, no matter how efficient or impressive the result appears.

Conclusion

Technology competence is a set of ongoing behaviors—awareness, evaluation, follow-through, skill-building, and supervision. Especially with GenAI, these behaviors must be deliberate and informed. Competent technology use is about recognizing the tool for what it is, understanding how it works, and exercising judgment at every step. But misuse is already happening and so are sanctions. The legal profession has no time left for casual experimentation or blind adoption. Learn your tools, evaluate them wisely, and maintain control.

WordRake is a traditional, trustworthy legal technology tool that offers over 50,000 editing algorithms created by linguists and subject matter experts. It is deterministic and does not include GenAI features or rely on machine learning. With WordRake, your data is always secure, and you remain in control of editing your drafts. WordRake speeds up your editing process so you can focus on the important work of evaluating and reworking your document’s reasoning and message. Take a 7-day free trial today!

A version of this paper was submitted in connection with the program Defining Technology Competency in the Age of GenAI presented by Ivy B. Grey and Kenton Brice on April 4, 2025 at the ABA TECHSHOW in Chicago, Illinois, USA.

About Ivy B. Grey

Ivy B. Grey is the Chief Strategy & Growth Officer for WordRake. Before joining the team, she practiced bankruptcy law for ten years. In 2020, Ivy was recognized as an Influential Woman in Legal Tech by ILTA. She has also been recognized as a Fastcase 50 Honoree and included in the Women of Legal Tech list by the ABA Legal Technology Resource Center. Follow Ivy on Twitter @IvyBGrey or connect with her on LinkedIn.

 

[1] When it comes to reasoning, lawyers and technologists are using the same word to describe vastly different functions. Lawyers reason through logic and analogies while applying law to fact; it is a function of the human mind with an awareness of the world. When technologists say GenAI reasons, they mean that it is sequentially cycling through technological processes to provide output that adequately reflects the prompt.
