
David J. Gengler
dgengler@kmksc.com
(414) 962-5110
As technological advances become increasingly prevalent in the workplace, businesses can be faced with difficult choices as to how and when to adapt. For decades, each new advance has come with its own set of benefits and challenges. Email, for example, revolutionized communication, but its benefits came with risks. Every organization should have a vigilant email policy to ensure it stays safe from outside attacks (such as phishing schemes) and to ensure internal communications are viewed only by the intended recipients. Few recent advances, however, have brought greater potential for opportunity, and risk, than artificial intelligence (AI).
Over the past several years, AI has seemingly gone from science fiction to a daily reality. Businesses will face many decisions about whether to utilize AI and how to integrate it into their day-to-day operations. A key consideration is how to use AI safely and securely, to protect the confidentiality of your own information and that of your customers and business partners.
When communicating with an AI system, you "prompt" the AI with questions or other inputs, and the AI generates responses, or outputs, for you. There are two basic types of AI services readily available: open and closed. Open AI services are those that are publicly available and can be accessed by anyone through the related website, app, etc. These are often cheaper than the alternative, closed services, but they come with their own risks. Most notably, many of these services use the information entered in prompts as part of the "training" of the underlying system, meaning the information you enter can be used by the AI model in preparing its responses to you and to any other user.
By contrast, a closed AI service restricts outside access to your information: your prompts are not used to train a publicly available model. This offers higher security, greater potential for customization, and more reliable responses. Certainly, these benefits come with higher costs, but in many cases, if you want to use AI for sensitive work, the cost may ultimately be unavoidable.
In the legal context, the attorney-client privilege is of paramount importance. Knowing that what you say and share with your attorney is privileged helps to ensure effective representation by allowing an honest and forthright discussion. In the AI context, however, new concerns have arisen about how that confidentiality can be, and has been, lost.
In a recent New York federal court case, the district court judge found that a defendant waived attorney-client privilege as to certain documents generated using AI. In that case, the defendant was indicted on criminal charges and decided to do independent research on the law and his potential defenses through AI. The defendant input information related to his case into Anthropic's Claude (a publicly accessible consumer AI service) and shared Claude's outputs with his attorney in an effort to formulate his defense strategy. The court found that because the information was being disclosed to a third party (the AI provider) rather than only to the attorney, and because the AI program could use that information as part of its training and, in certain instances, disclose it to other third parties, confidentiality had been waived as to those AI-generated documents. The documents were therefore discoverable and had to be shared with the prosecution.
It should be noted that this case is in many ways a "first of its kind," and the case law in this area is still developing. Even so, it is an early warning sign, particularly for anybody using a publicly accessible AI service, that the information you put into these systems may no longer be confidential or protected by attorney-client privilege. Moreover, these AI systems are not lawyers and cannot give legal advice, so when they are used for that purpose, the attorney-client privilege is likely to be deemed waived.
As your organization and its employees start utilizing AI, it is important to have robust protocols in place to avoid these types of issues: whether to use AI in the first place (and whether to limit its use), how to use AI (what information may go into prompts and what to expect from the outputs), and which AI services to use (open versus closed). If you need assistance in addressing these questions and drafting your office's own AI use policy, contact KMK Attorneys Melinda Bialzik at mbialzik@kmksc.com or David Gengler at dgengler@kmksc.com or (414) 962-5110.
