Foster J. Sayers III, General Counsel & Chief Evangelist, Pramata Corporation

ChatGPT has taken the world by storm, and the legal industry is no exception. While some are feeling breathless at its arrival, others have already incorporated the technology into the products they provide. What few have raised are the very real ethical concerns that using ChatGPT presents to attorneys. A review of the Model Rules of Professional Conduct, the ChatGPT Terms of Use, and the ChatGPT FAQ shows that attorneys risk ethical violations if they choose to use ChatGPT in providing legal services to their clients.

Most states have adopted a version of the Model Rules of Professional Conduct (MRPC) and require that an attorney maintain the confidentiality of client information. In MRPC 1.6: Confidentiality of Information, the first subsection (a) states, “A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b).” Paragraph (b) lists seven exceptions that permit disclosure, such as “to prevent reasonably certain death or substantial bodily harm,” but none of them covers using technology to perform some or all of the work related to a client representation. As such, any attorney wishing to use ChatGPT to perform services related to the representation of a client must first obtain the client’s consent. Paragraph (c) goes on to state, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” This raises a concern, given that information submitted to ChatGPT is viewable by more than just the user and the AI.

Ethical concerns arise because a conversation with ChatGPT is not merely an exchange with a computer program carrying an expectation of privacy – human beings review user conversations with ChatGPT. Number 5 on the ChatGPT FAQ asks, “Who can view my conversations?” The answer: “As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.” Number 6 asks, “Will you use my conversations for training?” The answer: “Yes. Your conversations may be reviewed by our AI trainers to improve our systems.” And if you disclose information in a prompt that you did not intend to share, you cannot delete it, as noted in the answer to question number 8: “No, we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.” [emphasis added]. The FAQ makes it clear that there is no reason to believe your conversation will be kept safe from human eyes.

Ethics rules experts might recall comment 19 to MRPC 1.6, which contemplates entering into a confidentiality agreement to ensure that any client information transmitted or disclosed will be kept in confidence. From comment 19: “Factors to be considered in determining the reasonableness of the lawyer’s expectation of confidentiality include the sensitivity of the information and the extent to which the privacy of the communication is protected by law or by a confidentiality agreement.” The ChatGPT Terms of Use, however, offer no such protection for information in your conversations. Section 5, “Confidentiality, Security and Data Protection,” states, “Confidential Information means nonpublic information that OpenAI or its affiliates or third parties designate as confidential or should reasonably be considered confidential under the circumstances, including software, specifications, and other nonpublic business information.” Information provided in your conversations with ChatGPT falls outside that definition and is afforded no protection. This is surely the reason that ChatGPT expressly cautions against sharing sensitive information in your conversations, as noted in the FAQ excerpt above.

Attorneys have a professional responsibility to be competent in their practice, as established in MRPC 1.1 Competence. Comment 8 to rule 1.1 states that “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.” ChatGPT is a perfect example of a technology whose apparent benefits must be weighed against the risks of using it. Work can be greatly accelerated, but what if the lack of confidentiality exposes your client to an unforeseen risk? Suppose you reviewed a competitive bid for your client more quickly using ChatGPT, but the bid ended up being seen by a QA analyst at OpenAI who has no obligation of confidentiality and whose spouse works for your competitor. It is an attorney’s job to think through these potential risks and consider whether they can competently employ ChatGPT without compromising their ethical obligations to the client.

A breach of client confidentiality is not only an ethical concern; it also risks a waiver of attorney-client privilege. For attorney-client privilege to be established, the communication must be between the attorney and the client and must concern seeking or providing legal advice. To maintain the privilege, the communication must be kept confidential. If the communication is then used in a prompt to ChatGPT, it has not been kept in confidence, and this would constitute a waiver of attorney-client privilege. Preserving attorney-client privilege is thus another reason that lawyers need to be extremely cautious about leveraging AI to provide legal services.

There are ways that AI can be used ethically to reduce the time lawyers spend on work, but most of those involve automating administrative tasks. That’s still a great thing. Lawyers need more time to exercise their expertise, and using AI to automate, for example, the storage of executed contracts is a great way to free up that time. Entrusting AI with the exercise of that expertise, however, will invariably give rise to ethical concerns that must be addressed before it is adopted.

About the Author

As Pramata’s general counsel and chief evangelist, Foster Sayers is passionate about using his technical knowledge to help legal professionals be more effective. Previously, Sayers was corporate counsel for Vertafore, where he led the company’s transformation of the contract lifecycle and automated the processes for organizing, storing and digitizing the company’s executed agreements using Pramata. He also co-founded 121Nexus, a business with QR code technology solutions for the pharmaceutical and biomedical industries. Earlier in his career, Sayers held in-house counsel positions for companies in industries such as manufacturing, video and IT. Sayers has a law degree from Florida State University College of Law and a bachelor of arts degree in international politics and Japanese from Penn State University.

Pramata makes contract management radically simple. With Pramata’s end-to-end solution, legal teams can easily and accurately manage the entire contract lifecycle – from request to renewal. Pramata does the heavy lifting to give companies the precise contract insights they need and help legal teams provide unparalleled value to the business.