You are almost certainly familiar with the revered attorney-client privilege as a concept and an ongoing practice in the law.
Movies and TV shows seem to relish plots involving attorney-client privilege dilemmas. An attorney in a tense courtroom drama is often shown as being conflicted about attempting to abide by the attorney-client provisions. All sorts of personal angst can underlie these conundrums. We are on the edge of our seats as the conflicted lawyer mightily tries to stay within the attorney-client privilege legal boundaries.
In the real world, there might not be quite as much daily outsized drama over the attorney-client privilege as seen in films, and yet this is nonetheless a vital tenet of our judicial system and undoubtedly a highly weighty topic for us all.
Here’s something that you might not have been considering about the vaunted attorney-client privilege.
The latest advances in Artificial Intelligence (AI) consisting of generative AI such as ChatGPT and GPT-4 might be an insidious underminer of the longstanding attorney-client privilege.
To clarify, it isn’t somehow that AI has gone sentient and ergo is able to potentially intercede in the privileged communications between an attorney and their client. We don’t have any AI that is sentient, which I realize might be surprising to some of you since there seem to be blaring headlines in the media that suggest otherwise. Read my lips, there isn’t any sentient AI today. Period, full stop.
One supposes that someday if AI does reach sentience, maybe at that juncture we might need to reconsider a lot of keystone assumptions about our society and the world at large. A tiny and intriguing aspect would be whether having a sentient AI involved in any attorney-client privileged communications might constitute a violation of the privilege as to a third-party intervening. Maybe we will eventually decide to anoint AI with a semblance of legal personhood, as I discuss at the link here, but that’s not in the cards today.
All right, so if AI is not sentient, you might be wondering how could AI such as generative AI confound the attorney-client privilege?
I’m glad that you asked that question.
The answer ties back to one of my prior column postings that discussed at length the concerns that generative AI can entail potential privacy intrusions and a lack of data confidentiality, see the link here. In today’s column, I will examine an especially noteworthy specific use case associated with the possible privacy intrusions and the possible leaks of confidential data that can occur when using generative AI apps including ChatGPT.
The specific use case involves the nature of the attorney-client privilege.
I will first cover some of the fundamentals of this exalted privilege. Next, I will make sure you are up-to-speed about generative AI. We can then combine the two topics and showcase how generative AI can inadvertently become a thorn in the side of the attorney-client privilege. The good news is that if lawyers are mindful and careful, and if clients making use of lawyers are likewise mindful and careful while embroiled in legal cases, the risks of disrupting the privilege via this particular possible breach are immensely lessened and can be driven down to zero (of course, lots of other ways of undermining the privilege remain intact and loom relentlessly).
The crux is that all parties need to know what they should be doing, what must not be done, and what they are permitted to do.
We’ll cover that.
Fundamentals Of The Attorney-Client Privilege
Let’s begin with some definitional facets.
According to the Cornell University Law School and its famed Legal Information Institute (LII) database, the attorney-client privilege can be defined in this manner:
- “Attorney-client privilege refers to a legal privilege that works to keep confidential communications between an attorney and their client private. Communications made to and by a lawyer in the presence of a third party may not be entitled to this privilege on grounds that they are not confidential. The privilege can be affirmatively raised in the face of a legal demand for the communications, such as a discovery request or a demand that the lawyer testify under oath. A client, but not a lawyer, who wishes not to raise attorney-client privilege as a defense is free to do so, thereby waiving the privilege. This privilege exists only when there is an attorney-client relationship” (LII, posting by the Wex Definitions Team).
Take a moment to unpack this definition.
One party of this mechanism is the attorney, while the other party is the associated client. When a client and an attorney are engaged in legal matters, a crucial belief is that the two need to be able to openly carry on communications about the legal case at hand. Imagine if we didn’t do things that way. Suppose that an attorney could go around town blabbing endlessly about the seemingly private matters of the client. That would not be a pretty picture of an effective judicial system.
Our laws try to make abundantly clear that there needs to be a “full and frank” relationship between an attorney and their client. The client has to feel confident that what they discuss or convey to and with their attorney will be held in confidence. This principle is near and dear to the heart of our legal approach (please realize that not all countries have this same standard).
In a pertinent U.S. Supreme Court case handed down in 1981, this excerpt expresses the vital nature of the attorney-client privilege and emphasizes the full and frank precept:
- “The attorney-client privilege is the oldest of the privileges for confidential communications known to the common law. Its purpose is to encourage full and frank communication between attorneys and their clients and thereby promote broader public interests in the observance of law and administration of justice. The privilege recognizes that sound legal advice or advocacy serves public ends and that such advice or advocacy depends upon the lawyer’s being fully informed by the client” (Upjohn Co. v. United States, 449 U.S. 383, 1981).
When referring to confidential communications, it is worthwhile to keep in mind that this consists of all manner of communication modes. A client might speak directly face-to-face with their attorney. This might also be done remotely via Zoom or similar. Emails might be sent back and forth between a client and their attorney. Believe it or not, physical pieces of printed papers and even the age-old fax machine might be utilized.
As an aside, there is a lot of talk these days about brain-machine interfaces (BMI). This emerging type of high-tech is intended to “read minds” but we are a long way from being able to do so in any substantive way, see my coverage at the link here. Envision that we are able to attain a viable device that can essentially read your thoughts and convey them to others. If you want to tell your attorney something, you might not need to speak it aloud, nor write it down, and instead do a brain-to-brain conveyance via both of you using a respective BMI. I dare suggest that even in that circumstance, we will still retain the attorney-client privilege and simply see this as yet another means of confidential communication.
Hang onto your hat for that day to arrive.
Now then, confidential communication via whatever modes are utilized is considered confidential as long as there isn’t a break in that private communication. You might have observed that the LII definition given above noted that “communications made to and by a lawyer in the presence of a third party may not be entitled to this privilege on grounds that they are not confidential.” The essence is that if a third party is privy to the communication, the odds are that the communication no longer enjoys the stated privilege.
In short, we have these three major components:

- Attorney – the legal counsel engaged by the client
- Client – the person or entity being represented
- Third party – anyone else who might become privy to the communications
There are numerous caveats about all of this.
For example, the attorney and the client have to ostensibly form a legal relationship, else the privilege is not necessarily underway (this is what is termed the attorney-client relationship). The attorney is principally bound to maintain the privilege, while the client can opt to break the privilege if they choose to do so (the client can waive the privilege). You probably aren’t surprised that an aspect of the law is likely to have a plethora of twists and turns. That’s what our laws seem to imbue.
In the United States, the American Bar Association (ABA) provides key guidance to attorneys about the attorney-client privilege. Various rules exist. Attorneys are expected to know the rules and adhere to them. Furthermore, the rules are periodically updated, and also new rules are added. Lawyers cannot just learn the rules at one point in time and remain stuck in time. They are required to keep up with the stipulated rules.
Per the American Bar Association (ABA), Rule 1.6 Confidentiality of Information addresses salient key points regarding the attorney-client privilege, such as this passage:
- “A fundamental principle in the client-lawyer relationship is that, in the absence of the client’s informed consent, the lawyer must not reveal information relating to the representation. See Rule 1.0(e) for the definition of informed consent. This contributes to the trust that is the hallmark of the client-lawyer relationship. The client is thereby encouraged to seek legal assistance and to communicate fully and frankly with the lawyer even as to embarrassing or legally damaging subject matter. The lawyer needs this information to represent the client effectively and, if necessary, to advise the client to refrain from wrongful conduct. Almost without exception, clients come to lawyers in order to determine their rights and what is, in the complex of laws and regulations, deemed to be legal and correct. Based upon experience, lawyers know that almost all clients follow the advice given, and the law is upheld” (ABA Rule 1.6, Subsection 2 excerpt).
The rule points out that clients might not seek out attorneys if the privilege did not exist. You would naturally be worried that whatever you told the attorney could be held against you. The attorney might opt to tattle on you. The attorney might be dragged into court to testify against you. By and large, the aim is to try and encourage people to accept and follow the rule of law. We might have societal chaos were it not for people overwhelmingly willingly abiding by the rule of law.
In some of those overstretched movie plots, the attorney-client privilege is incorrectly portrayed as being absolute. It is not.
Consider this additional component of the ABA Rule 1.6:
- “Paragraph (b)(2) is a limited exception to the rule of confidentiality that permits the lawyer to reveal information to the extent necessary to enable affected persons or appropriate authorities to prevent the client from committing a crime or fraud, as defined in Rule 1.0(d), that is reasonably certain to result in substantial injury to the financial or property interests of another and in furtherance of which the client has used or is using the lawyer’s services. Such a serious abuse of the client-lawyer relationship by the client forfeits the protection of this Rule. The client can, of course, prevent such disclosure by refraining from the wrongful conduct” (ABA Rule 1.6, Subsection 7 excerpt).
Take a close look at that excerpted rule. If a client communicates to their attorney that they intend to commit a crime or fraud, this indication by the client can potentially break the limits of the privilege. You might be puzzled why this would be a breakage. Well, we all would be quite steamed to find out after the fact that an attorney had been apprised by their client that they were going to, say, murder someone, and they then did so, and the attorney did nothing whatsoever about it.
The gist is that there are tensions between adhering to the privilege versus breaking the privilege. One would seek a balance of assuring that the privilege was relatively intact. Too many ways of breaking it would seem to weaken the potency of the privilege. On the other hand, various circumstances might outweigh the privilege with respect to the greater good of society all told.
I believe that lays a sufficient foundation on the topic as is needed for this discussion. You are certainly encouraged to learn more about the famed attorney-client privilege if that seems of interest to you.
Generative AI And ChatGPT
I’m betting that you already have heard about ChatGPT, a generative AI app made by OpenAI. Thus, I’ll make this overview of generative AI and ChatGPT a quick one. To get more details about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, dubbed GPT-4, see the discussion at the link here.
ChatGPT is a headline-grabber that is widely known for being able to produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application.
I’ll repeat what I said earlier about the overall capabilities of today’s AI. ChatGPT is not sentient. We don’t have sentient AI. Do not fall for those zany headlines and social media rantings.
Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language.
There are four primary modes of being able to access or utilize ChatGPT:
- 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web or soon on your smartphone as an app
- 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in Microsoft Bing search engine
- 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
- 4) ChatGPT-to-App. The latest added use entails accessing other applications from within ChatGPT via plugins
The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly. See my discussion about the API at the link here. For my analysis of how the plugins will be a game changer, see the link here.
I and others are saying that this will give rise to ChatGPT as a platform.
All manner of new apps and existing apps are going to hurriedly connect with ChatGPT. Doing so provides the interactive conversational functionality of ChatGPT. The users of existing apps will be impressed with the added facility. Furthermore, if there is also an approved plugin, this means that anyone using ChatGPT can now make use of that particular app while inside ChatGPT.
Not everyone is over the moon about the increasing use of generative AI such as ChatGPT.
You might vaguely know that generative AI such as ChatGPT has many flaws. Besides the possibility of producing offensively worded essays and interactions, there are many additional and extremely disconcerting issues about today’s generative AI.
Four concerns about generative AI that I have extensively covered include:
- 1) Errors. Generates wording and essays that have errors of fact or miscalculations, etc.
- 2) Falsehoods. Generates false assertions and other insidious falsehoods.
- 3) Biases. Generates wording and essays that contain biases of nearly any and all kinds.
- 4) AI Hallucinations. Generates what appears to be factual but is made-up and not at all factually based (I don’t like the term “AI hallucinations” due to the anthropomorphizing of AI, but it seems to be a catchphrase that has regrettably gained acceptance, see my discussion at the link here).
Lest you shrug off those pitfalls, realize that people using generative AI are bound to fall into the trap of accepting the outputted essays as truthful and factual. Doing so is easy-peasy. You see lots of essays and interactions that seem on par with human levels of fluency and confidence. You get lulled into assuming that everything uttered is of the utmost correctness.
Even the most ardent supporters of generative AI would acknowledge that we have severe problems associated with the generation of errors, falsehoods, biases, and AI hallucinations. No reasonable AI researcher or AI developer could disagree with that contention.
Into all of this comes a slew of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and to deter purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Generative AI Has Privacy And Data Confidentiality Issues
People are often surprised when I tell them that the data entered into an AI app such as ChatGPT is potentially not at all entirely private to you and you alone. It could be that your data is going to be utilized by the AI maker to presumably seek to improve their AI services or might be used by them and/or their allied partners for a variety of purposes.
Here is what I proffered in my column posting about ChatGPT privacy considerations and data confidentiality concerns (see the link here):
- Be very, very, very careful about what data or information you opt to put into your prompts when using generative AI, and similarly be extremely careful and anticipate what kinds of outputted essays you might get since the outputs can also be absorbed too.
I’ll add a twist to the aforementioned cautionary alert.
Whereas you might directly be entering your text into a ChatGPT prompt, realize that you could indirectly be doing so too. Recall that I mentioned the four ways of accessing ChatGPT. Let’s revisit those four in light of privacy and data confidentiality issues:
- 1) Directly. You directly enter a prompt that contains your private or confidential info, which then goes into ChatGPT
- 2) Indirectly. You indirectly use kind-of ChatGPT (actually, GPT-4) embedded in the Microsoft Bing search engine and enter a prompt containing your private or confidential info, which then goes into the generative AI app
- 3) App-to-ChatGPT. You use some other application that connects to ChatGPT via the API (application programming interface), and your private data or confidential info gets fed into ChatGPT
- 4) ChatGPT-to-App. You use a ChatGPT plugin, which then conveys your private or confidential info further into ChatGPT and possibly elsewhere too
The threat surface or vulnerability range has been demonstrably expanded due to the advent of API usage and plugins.
When you log onto ChatGPT, there are a series of cautions and informational comments displayed.
Here they are:
- “May occasionally generate incorrect information.”
- “May occasionally produce harmful instructions or biased content.”
- “Trained to decline inappropriate requests.”
- “Our goal is to get external feedback in order to improve our systems and make them safer.”
- “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
- “Conversations may be reviewed by our AI trainers to improve our systems.”
- “Please don’t share any sensitive information in your conversations.”
- “This system is optimized for dialogue. Let us know if a particular response was good or unhelpful.”
- “Limited knowledge of world and events after 2021.”
Take special note of the sixth bullet point that says your ChatGPT conversations might be reviewed by the vendor’s AI trainers. That is what we’ll call a type of third party in the context of this discussion about the attorney-client privilege. Also, notice that the seventh bullet point cautions you to not share any sensitive information when undertaking conversations with ChatGPT.
The sad thing about these warnings is that it seems as though many people breeze right past the warnings. Perhaps we have become numb to warnings on all manner of products and services. We just mindlessly go past the alerts and notifications.
Some even claim that the AI app ought to repeatedly warn you. Each time you enter a prompt, the software would pop up a warning and ask whether you really want to hit return. Over and over again. Though this might seem like a helpful precaution, it would admittedly irritate the heck out of users. A thorny tradeoff is involved. I’m sure that someone will eventually sue, claiming that they were wronged by the generative AI usurping their private data, and part of the legal argument will be that the warnings were unclear, insufficient, etc. Whether they could prevail in court is as yet undecided.
I would urge anyone that is going to use generative AI such as ChatGPT to closely examine the licensing aspects. Most people do not bother to do so.
Since I herein am discussing the attorney-client privilege, one would certainly hope that any lawyer using generative AI such as ChatGPT would take a hard look at the licensing particulars.
Regrettably, some do not do so. They are so excited to try generative AI that they seem to momentarily lose their heads. They leap in and eagerly try out ChatGPT. Not a wise lawyering thing to do. At first, they just play around. Then, they get kind of hooked on the AI. This then can take them down a dour slippery slope whereby they begin to enter client info into the generative AI. For my exploration of how lawyers and law practices can sensibly and productively make use of generative AI, see the link here.
We can briefly take a glimpse at the ChatGPT licensing (the licensing is noted on the OpenAI website, and subject to change, so make sure to check whatever is the latest posting).
First, here’s a definition of what they consider “content” associated with the use of ChatGPT:
- “Your Content. You may provide input to the Services (‘Input’), and receive the output generated and returned by the Services based on the Input (‘Output’). Input and Output are collectively “Content.” As between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output. OpenAI may use Content as necessary to provide and maintain the Services, comply with applicable law, and enforce our policies. You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms.”
If you carefully examine that definition, you’ll notice that OpenAI declares that it can use the Content as deemed necessary to provide and maintain its services, including complying with applicable laws and enforcing its policies. This is a handy catchall for them. In an upcoming column, I’ll be discussing a different but related topic, specifically the Intellectual Property (IP) rights you have regarding the entered text prompts and outputted essays (I point this out herein since the definition of Content bears on that topic).
In a further portion of the terms, labeled as section c, they mention this facet: “One of the main benefits of machine learning models is that they can be improved over time. To help OpenAI provide and maintain the Services, you agree and instruct that we may use Content to develop and improve the Services.” This is akin to the earlier discussed one-line caution that appears when you log into ChatGPT.
A separate document that is linked to this provides some additional aspects on these weighty matters:
- “As part of this continuous improvement, when you use OpenAI models via our API, we may use the data you provide us to improve our models. Not only does this help our models become more accurate and better at solving your specific problem, it also helps improve their general capabilities and safety. We know that data privacy and security are critical for our customers. We take great care to use appropriate technical and process controls to secure your data. We remove any personally identifiable information from data we intend to use to improve model performance. We also only use a small sampling of data per customer for our efforts to improve model performance. For example, for one task, the maximum number of API requests that we sample per customer is capped at 200 every 6 months” (excerpted from the document entitled “How your data is used to improve model performance”).
Note that the stipulation indicates that the provision applies to the use of the API as a means of connecting to and using the OpenAI models all told. It is somewhat murky as to whether this equally applies to end users that are directly using ChatGPT.
In yet a different document, one that contains their list of various FAQs, they provide a series of questions and answers, two of which seem especially pertinent to this discussion:
- “(5) Who can view my conversations? As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.”
- “(8) Can you delete specific prompts? No, we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.”
I think that pretty much provides a tour of some considerations underlying how your data might be used. As I mentioned at the outset, I am not going to laboriously step through all of the licensing stipulations here.
Hopefully, this puts you into the right frame of mind on these matters and keeps them top of mind.
Returning to the earlier identified parties, let’s go ahead and now annotate how each pertains to the attorney-client privilege and the use of generative AI such as ChatGPT:
- Attorney – either unintentionally or intentionally enters client confidential information into generative AI and thus puts at risk the attorney-client privilege due to the potential exposures involved.
- Client – either unintentionally or intentionally enters their confidential information into generative AI that they are intending to share with their attorney or have shared with their attorney and thus puts at risk the attorney-client privilege due to the potential exposures involved.
- Third-party – either unintentionally or intentionally comes into contact with confidential information within the generative AI that pertains to an attorney-client relationship, whether entered by the attorney involved, by the client involved, or via other akin means.
We can also consider these facets of generative AI and the attorney-client privilege:
- Exposure As Intended. Generative AI that by intentional design or via denoted licensing allows third-party access to data within the generative AI, overall, and in this use case encompasses data that has been entered by an attorney or a client as part of an attorney-client relationship, or as accessed from an allied app via the API or due to a plugin to the generative AI.
- Unintended Exposure. Generative AI that by error or other unintended aspects exposes data within the generative AI, overall, and in this use case encompasses data that has been entered by an attorney or a client as part of an attorney-client relationship, or as accessed from an allied app via the API or due to a plugin to the generative AI.
- Futuristic Exposure. Generative AI that might at some point in the future be considered for attaining legal personhood and thus could conceivably be construed as a legally recognized third party (this is highly speculative, and we’ll need to wait and see).
I don’t want to discourage you from using generative AI. That is assuredly not my point.
Use generative AI to your heart’s content. The crux is that you need to be mindful of how you use it. Find out what kind of licensing stipulations are associated with the usage. Decide whether you can live with those stipulations. If there are avenues to inform the AI maker that you want to invoke certain kinds of added protections or allowances, make sure you do so.
I will also mention one other facet that I realize will get some people boiling mad. Here goes. Despite whatever the licensing stipulations are, you have to also assume that there is a possibility that those requirements might not be fully adhered to. Things can go awry. Stuff can slip between the cracks. In the end, sure, you might have a legal case against an AI maker for not conforming to their stipulations, but that’s somewhat after the horse is already out of the barn.
You might be aware that a recent “bug” in ChatGPT allowed some users to see the prompts and conversations of other users, see my coverage at the link here. This incident reinforces my emphasis that even if the licensing seems agreeable to you, there are still other chances that the generative AI will go awry and allow your entered private and confidential data to escape or be seen.
A potentially highly secure way to proceed would be to set up your own instance on your own systems, whether in the cloud or in-house (and, assuming that you adhere to the proper cybersecurity precautions, which admittedly some do not and they are worse off in their own cloud than using the cloud of the software vendor). A bit of a nagging problem though is that few of the generative AI large-scale apps allow this right now. They are all pretty much working on an our-cloud-only basis. Few have made available the option of having an entire instance carved out just for you. I’ve predicted that we will gradually see this option arising, though at first it will be rather costly and somewhat complicated, see my predictions at the link here.
How An Attorney Can Get Into Hot Water
Consider the creation of legal documents. That’s obviously a particularly serious matter. Words and how they are composed can spell a spirited legal defense or a dismal legal calamity.
In my ongoing research and consulting, I interact regularly with a lot of attorneys that are keenly interested in using AI in the field of law. Various LegalTech programs are getting connected to AI capabilities, see my ongoing coverage at the link here. A lawyer can use generative AI to compose a draft of a contract or compose other legal documents. In addition, if the attorney made an initial draft themselves, they can pass the text over to a generative AI app such as ChatGPT to take a look and see what holes or gaps might be detected. For more about how attorneys and the legal field are opting to make use of AI, see my discussion at the link here.
We are ready though for the rub on this.
An attorney takes a drafted contract that contains client-specific confidential data and copies the text into a prompt for ChatGPT. The AI app produces a review for the lawyer. Turns out that several improvements are uncovered by ChatGPT, thankfully so. The attorney revises the contract. They might also ask ChatGPT to suggest a rewording or redo of the composed text for them. A new and better version of the contract is then produced by the generative AI app. The lawyer grabs up the outputted text and plops it into a word processing file. Off the missive goes to their client. Mission accomplished.
Can you guess what also just happened?
Behind the scenes and underneath the hood, the contract might have been swallowed up like a fish into the mouth of a whale. Though this AI-using attorney might not realize it, the text of the contract, as placed as a prompt into ChatGPT, could potentially get gobbled up by the AI app. It now is fodder for pattern matching and other computational intricacies of the AI app. This in turn could be used in a variety of ways. If there is confidential data in the draft, that too is potentially now within the confines of ChatGPT. Your prompt as provided to the AI app is now ostensibly a part of the collective in one fashion or another. Also, the prompt can presumably be examined by the vendor, as per the stated caution when logging into ChatGPT.
Furthermore, the outputted essay is also considered part of the collective. If you had asked ChatGPT to modify the draft for you and present the new version of the contract, this is construed as an outputted essay. The outputs of ChatGPT are also a type of content that can be retained or otherwise transformed by the AI app.
Yikes, you might have innocently given away private or confidential information. Not good. Plus, you wouldn’t even be aware that you had done so. No flags were raised. A horn didn’t blast. No flashing lights went off to shock you into reality.
We might anticipate that non-lawyers could easily make such a mistake, but for a versed attorney to make the same rookie mistake is nearly unimaginable. Nonetheless, there are likely legal professionals right now making this same potential blunder. They risk violating a noteworthy element of the attorney-client privilege and possibly breaching the American Bar Association (ABA) Model Rules of Professional Conduct (MRPC).
Some attorneys might seek to excuse their transgression by claiming that they aren’t tech wizards and that they would have had no ready means to know that their entering of confidential info into a generative AI app might somehow be a breach of sorts. The ABA has made clear that a duty for lawyers encompasses being up-to-date on AI and technology from a legal perspective: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject” (per MRPC).
Several provisions come into this semblance of legal duty, including maintaining client confidential information (Rule 1.6), protecting client property such as data (Rule 1.15), properly communicating with a client (Rule 1.4), obtaining client informed consent (Rule 1.6), and ensuring competent representation on behalf of a client (Rule 1.1). And there is also the little-known but highly notable AI-focused resolution passed by the ABA: “That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (‘AI’) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”
Words to the wise for my legal friends and colleagues.
The bottom line of the matter is that just about anyone can get themselves into a jam when using generative AI. Non-lawyers can do so by their presumed lack of legal acumen. Lawyers can do so too, perhaps enamored of the AI or not taking a deep breath and reflecting on what legal repercussions can arise when using generative AI.
We are all potentially in the same boat.
More To Contemplate
When using generative AI such as ChatGPT, suppose an attorney enters private or confidential data about a client. One question arises as to whether a tree that falls in a forest makes a noise if it isn’t actually heard.
Allow me to explain.
Assume that the attorney didn’t realize anything at all about the legal exposures by entering private or confidential data into ChatGPT. They were utterly unaware of this. Meanwhile, assume that the private or confidential data never shows up anywhere, including that nobody at the vendor or affiliated with the vendor perchance sees the private or confidential data.
On the one hand, you might claim that this is a no-harm no-foul. Nothing bad came from the potential exposure. Seems like a compelling argument that nobody heard the tree fall. Thus, it doesn’t matter that the tree fell. A counterargument is that the risk of the data being seen is sufficiently bad and undercuts the attorney-client privilege. Just because the third party hasn’t acted on the data doesn’t mean that the lawyer gets away scot-free.
Mull that over.
Let’s do a bit of a switcheroo.
A client is aiming to provide a written communication to their attorney about a legal matter underway. The client opts to use ChatGPT to help write the communique. During the interactive conversation with the generative AI app, the client enters various private and confidential info that they intend to have included in the essay that is being composed.
Has the client now usurped the attorney-client privilege with respect to whatever private or confidential data it is that they are then providing to their attorney?
I bring up that point because it is one thing for attorneys to make sure they don’t break the attorney-client privilege, and quite another for them to advise their clients on how to avoid doing so too. The client might not have a clue about how their actions can undercut the privilege. They usually are only familiar with the privilege to the degree that their lawyer explains it to them.
Another twist is this. An attorney asks another fellow attorney to help with a legal case. Various client info is provided to the other attorney. This now secondary attorney uses ChatGPT. The first attorney doesn’t realize that the consulting attorney did so. What impact, if any, might this have on the attorney-client privilege at stake here?
We can refer to additional ABA rules that indicate an attorney is to act competently on these matters:
- “Paragraph (c) requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client or who are subject to the lawyer’s supervision” (ABA Rule 1.6, Subsection 18 excerpt).
And this ABA rule also applies:
- “The unauthorized access to, or the inadvertent or unauthorized disclosure of, information relating to the representation of a client does not constitute a violation of paragraph (c) if the lawyer has made reasonable efforts to prevent the access or disclosure. Factors to be considered in determining the reasonableness of the lawyer’s efforts include, but are not limited to, the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use)” (ABA Rule 1.6, Subsection 18 excerpt).
Imagine this scenario.
An attorney gets jammed up by having entered client confidential data into ChatGPT. Suppose that at some point, a third party, let’s say the AI maintenance and training crew of the vendor, sees the confidential data. Seems like the lawyer is in hot water.
They might try to argue that the confidential data wasn’t especially sensitive and thus it doesn’t matter that a third party was able to see it. They might argue that they had taken other safeguards to protect the data, for which this generative AI use was not covered and yet they otherwise were very prudent in being protective. They might try to argue that they ostensibly had to use ChatGPT to properly aid their client. Etc.
A few final thoughts on this topic for now.
In an article entitled “ChatGPT and Ethics: Can Generative AI Break Privilege and Waive Confidentiality?”, attorney Foster J. Sayers provides this crucial insight for lawyers:
- “ChatGPT is a perfect example where the apparent benefits of the technology need to be weighed against the risks associated with using it. Work can be greatly accelerated, but what if the lack of confidentiality exposes your client to an unforeseen risk? You may be able to review a competitive bid for your client more quickly using ChatGPT, but what if the bid ends up being viewed by a QA analyst at OpenAI who has no obligation of confidentiality and whose spouse works for your client’s competitor? It’s the job of an attorney to think through these potential risks and consider whether they can competently employ ChatGPT without compromising their ethical obligations to the client” (January 26, 2023, Law.com).
Attorneys should indeed be encouraged to leverage generative AI, yet do so with prudent legal care.
Here are my handy tips or options for putting this sage piece of advice into practice:
- Think Before Using Generative AI
- Remove Stuff Beforehand
- Mask Or Fake Your Inputs
- Setup Your Own Instance
I’ll indicate next what each one of those consists of. The setting up of your own instance was earlier covered herein, so I’ll focus on the remaining three. There are also additional ways to cope with preventing confidential data from getting included, which I will be further covering in a future column posting.
Let’s examine these:
- Think Before Using Generative AI. One approach involves avoiding using generative AI altogether. Or at least think twice before you do so. I suppose the safest avenue involves not using these AI apps. But this also seems quite severe and nearly overboard.
- Remove Stuff Beforehand. Another approach consists of removing confidential or private information from whatever you enter as a prompt. In that sense, if you don’t enter it, there isn’t a chance of it getting infused into the Borg. The downside is that maybe the removal of the confidential portion somehow reduces or undercuts what you are trying to get the generative AI to do for you.
- Mask Or Fake Your Inputs. You could modify your proposed text by changing up the info so that whatever seemed confidential or private is now differently portrayed. For example, instead of a contract mentioning the Widget Company and John Smith, you change the text to refer to the Specious Company and Jane Capone. An issue here is whether you’ll do a sufficiently exhaustive job such that all of the confidential and private aspects are fully altered or faked. It would be easy to miss some of the needed mods and leave in stuff that ought to not be there.
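To make the mask-or-fake approach concrete, here is a minimal sketch in Python of how such a substitution could be done mechanically rather than by hand. The names, the mapping, and the sample draft are all hypothetical examples of my own devising, not a vetted redaction tool; as noted above, a lookup table like this only catches the terms you thought to list, so anything omitted from the mapping would still slip through.

```python
# Hypothetical sketch: swap confidential names for stand-ins before
# pasting text into a generative AI app, then restore them afterward.
# The mapping below is an illustrative example only.
CONFIDENTIAL_MAP = {
    "Widget Company": "Specious Company",
    "John Smith": "Jane Capone",
}

def mask(text: str, mapping: dict) -> str:
    """Replace each confidential term with its fake stand-in."""
    for real, fake in mapping.items():
        text = text.replace(real, fake)
    return text

def unmask(text: str, mapping: dict) -> str:
    """Reverse the substitution on the AI app's output."""
    for real, fake in mapping.items():
        text = text.replace(fake, real)
    return text

draft = "This agreement is between Widget Company and John Smith."
masked = mask(draft, CONFIDENTIAL_MAP)
print(masked)  # this masked version is what would go into the prompt
restored = unmask(masked, CONFIDENTIAL_MAP)
print(restored)
```

The weak spot, as cautioned, is completeness: the code only alters terms explicitly listed in the mapping, which is exactly why a missed name or identifier can still leak through.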
I trust that you will take these precautions to heart.
Those above precautionary snippets are important pieces of advice for attorneys. Those are also important pieces of advice for clients. And, if I might say, those are important pieces of advice that savvy attorneys ought to be mentioning to their clients.
Let’s be full and frank about preserving the attorney-client privilege. No need to let AI get into the middle of this by being sloppy. Make sure that the latest in AI doesn’t undermine one of the oldest and most enduring privileges for protecting confidential communications known to the common law.