
    GUIDANCE FOR THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE IN THE PRACTICE OF LAW IN ARIZONA

    INTRODUCTION

    The term “artificial intelligence” or “AI” has been defined statutorily as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[1] Further, the term “generative AI” refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.[2] Generative AI can be a powerful tool with broad applications in legal practice and administrative functions for law firms, legal service organizations, in-house legal departments, and government legal offices of all sizes and practice areas. Like any technology, its use must align with a legal professional’s professional responsibilities, including the Arizona Rules of Professional Conduct and Arizona Supreme Court Rules. Legal professionals must understand the risks and benefits associated with generative AI in connection with providing legal services. How these professional responsibilities apply depends on various factors, including the nature of the client, the matter, the practice area, firm size, and the generative AI tools utilized, which can range from publicly available models to proprietary solutions.

    Generative AI presents unique challenges for legal professionals: it relies on vast data sources, numerous competing models exist, and even AI developers lack full transparency into how these systems work. Additionally, generative AI’s ability to produce confident-sounding responses may lead to undue reliance on its outputs. Legal professionals must assess these risks before integrating generative AI into their practice.

    The State Bar of Arizona acknowledges generative AI’s potential to enhance legal practice, but such use must align with ethical responsibilities. Legal professionals must exercise caution, independent judgment, and verification when integrating generative AI into legal work. This Practical Guidance seeks to provide a framework for responsible generative AI use in the Arizona legal profession.

    PRACTICAL GUIDANCE

    Duty of Confidentiality

    • Applicable Authorities: A.R.S. § 12-2234; Rule 42, ER 1.6; ER 1.8(b)

    Generative AI systems may process, store, and potentially share input data, including prompts, documents, or legal queries. Even when a generative AI system does not directly share information, it may lack adequate security protections. Among key confidentiality considerations:

    • A legal professional must not enter any confidential client information into generative AI platforms unless adequate safeguards are in place.
    • A legal professional must employ adequate safeguards whenever using generative AI, including storing data, information, and recordings on encrypted, access-controlled generative AI platforms.
    • Legal professionals must treat generative AI as they would any other third party with whom they may share client information, omitting any identifying information. For example, client information should be anonymized when using publicly available generative AI platforms, and legal professionals should limit or avoid inputting details that could identify the client.
    • Legal professionals should consult IT and cybersecurity experts to ensure that generative AI systems meet stringent security, confidentiality, and data retention protocols.
    • A legal professional must review applicable terms governing any generative AI tool to ensure it does not share input data with third parties or use it for training or improving its model.

    Duties of Competence and Diligence

    • Applicable Authorities: Rule 42, ER 1.1; ER 1.3

    Generative AI outputs may contain hallucinations or other false, inaccurate, or biased information. Among key competence and diligence considerations:

    • Legal professionals must understand the functionality, limitations, and risks of generative AI before use.
    • Legal professionals must be aware that generative AI-generated results are not a substitute for the legal professional’s judgment.
    • All generative AI-generated content must be independently verified for accuracy and bias.
    • Legal professionals must critically review and refine AI-generated legal research, citations, arguments, and documents before submission.
    • The duty of competence requires more than just detecting errors—it requires active legal analysis and validation of generative AI-assisted work product.

    Duty to Supervise Lawyers and Nonlawyers

    • Applicable Authorities: Rule 42, ER 5.1; ER 5.2; ER 5.3

    Legal professionals, law firms, legal service organizations, in-house legal departments, and government legal offices shall make reasonable efforts to remain vigilant, updating generative AI systems and practices as needed to address emerging ethical issues. This includes staying abreast of regulatory changes, technological advancements, and evolving ethical guidelines to ensure ongoing compliance and responsible generative AI use. Among key supervision considerations:

    • Law firms, legal service organizations, in-house legal departments, and government legal professionals should establish clear AI policies, conduct training, and monitor generative AI usage among attorneys and support staff. Supervisory legal professionals should ensure firm/department-wide compliance with such generative AI usage policies.
    • Supervised legal professionals should not use generative AI in a way that violates legal professional responsibilities, even if directed by a supervising attorney.

    Communication Regarding Generative AI Use

    • Applicable Authorities: Rule 42, ER 1.4; ER 1.2

    Clients should, at a minimum, be informed about, and preferably consent to, generative AI usage whenever it impacts their representation. Among key communication considerations:

    • If utilizing generative AI to record a client interaction, a legal professional must clearly inform clients that the meeting will be recorded, explain how generative AI will process the recording, and obtain their consent.
    • A legal professional should review any client instructions or contractual restrictions that may limit generative AI usage.
    • When using a generative AI recording tool, legal professionals may offer a transcript or summary of the recording to the client for transparency.
    • A legal professional should allow clients to opt out of, or request deletion of, an AI-generated summary of the recording of a client meeting if desired.

    Charging for Work Performed with Generative AI and AI-Related Costs

    • Applicable Authorities: Rule 42, ER 1.5

    A legal professional may use generative AI to create work product more efficiently and may charge for services provided (e.g., crafting or refining generative AI inputs and prompts, or reviewing and editing generative AI outputs). Among key economic considerations:

    • Legal professionals may charge for crafting and refining generative AI inputs and prompts, and for reviewing and editing generative AI-generated content. Such charges may be structured similarly to those for legal research tools (e.g., Westlaw, Lexis).
    • Legal professionals should ensure that fees charged for any services provided with generative AI work are reasonable in light of the scope of such services and other factors set forth in ER 1.5.
    • Any generative AI-related costs should be clearly disclosed in client fee agreements.

    Candor to the Tribunal and Meritorious Claims

    • Applicable Authorities: Rule 42, ER 3.1; ER 3.3

    Legal professionals using generative AI must ensure that its outputs do not mislead the tribunal and that all claims are and remain meritorious. Among key considerations regarding duties to the tribunal:

    • Legal professionals must independently verify all generative AI-generated citations, arguments, and factual statements before submitting them to a court or administrative body.
    • Any jurisdiction-specific rules on generative AI disclosure should be reviewed before using generative AI in legal pleadings or filings.
    • Legal professionals should stay apprised of any local rules and/or requirements of individual courts or judges regarding use of generative AI in such tribunals.

    Prohibition on Discrimination, Harassment, and Retaliation

    • Applicable Authorities: Rule 42, ER 8.4; Arizona Supreme Court Rule 41(c)

    Generative AI models may reinforce existing biases, affecting client screening, employment decisions, or legal outcomes. Among key bias considerations:

    • Legal professionals should continuously educate themselves on generative AI bias and its ethical implications.
    • Law firms, legal service organizations, in-house legal departments, and government legal professionals should implement mechanisms to detect and mitigate generative AI bias in legal practice.
    • Regular review and/or audits of generative AI outputs can help detect biases or errors, ensuring that generative AI outputs do not compromise client confidentiality or the quality of legal services.

    CONCLUSIONS

    The State Bar of Arizona recognizes the potential of generative AI to enhance efficiency in legal practice. Legal professionals must exercise caution, critical analysis, and independent judgment when integrating generative AI into their practice in order to comply with their professional responsibility obligations. This Practical Guidance serves as a framework for legal professionals to ensure compliance with ethical and legal standards in Arizona while leveraging generative AI responsibly.

    [1] 15 U.S.C. § 9401.

    [2] See https://research.ibm.com/blog/what-is-generative-AI

