CONFIDENTIAL // Skribe Intelligence Division — Deposition Intelligence Briefing
Field Manual // Chapter 16
Chapter 16 — Ethics & Best Practices

Ethics & AI Best Practices for Deposition Practice

🤖
REFERENCE — Applicable to All AI Usage

Review these ethical guidelines before implementing any AI workflow in your deposition practice.

Chapter 7: Ethics, Security & AI Best Practices for Litigation

Chapter ID: ch7 | Field Manual Section: Professional Practice Standards

Comprehensive guidance on ethical AI use, data security, disclosure obligations, and compliance with professional responsibility rules in litigation practice.

Confidentiality & Privilege Protection

The attorney-client privilege and work product protection are fundamental to legal practice. Using AI tools requires careful consideration of what information enters third-party systems. Any information uploaded to a cloud-based AI platform without appropriate safeguards risks waiving privilege.

Key Considerations: Before using AI, determine whether the communication or work product qualifies for privilege protection. If it does, ensure the AI platform offers data confidentiality guarantees (such as enterprise agreements that exclude training data usage). Anonymization—removing identifying details about clients, opposing parties, and case facts—provides an additional layer of protection.

⚖ Ethics Note: ABA Model Rule 1.6 Confidentiality

Attorneys must not reveal information relating to client representation without informed consent, except as permitted or required by the rules. When using AI tools, this obligation extends to ensuring that third-party platforms do not retain or use client data for their own purposes. Enterprise agreements with AI providers should explicitly prohibit training on your data.

Anonymization Techniques

Effective anonymization removes or replaces identifying information while preserving the analytical value of the content. Replace party names with roles (Plaintiff, Defendant, Third Party), remove specific dates (substitute "Day 1," "Week 2"), redact document numbers, exclude email addresses, and use generic location descriptions when geography is not material to the analysis.
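As a rough illustration, these substitutions can be scripted before any AI upload. The names, date pattern, and mapping below are hypothetical; a real matter would build its own key, store it separately from the anonymized text, and have an attorney verify the output:

```python
import re

# Hypothetical identifier key -- in practice this mapping is built per matter
# and stored separately from the anonymized transcript.
IDENTIFIER_KEY = {
    "Jane Smith": "Plaintiff",
    "Acme Corp.": "Defendant",
    "jsmith@example.com": "[EMAIL REDACTED]",
}

MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
DATE_PATTERN = re.compile(rf"\b(?:{MONTHS})\s+\d{{1,2}},\s+\d{{4}}\b")

def anonymize(text: str) -> str:
    """Replace known identifiers and calendar dates before any AI upload."""
    for original, replacement in IDENTIFIER_KEY.items():
        text = text.replace(original, replacement)
    return DATE_PATTERN.sub("[DATE REDACTED]", text)

excerpt = ("Jane Smith emailed jsmith@example.com on March 3, 2021 "
           "regarding Acme Corp. documents.")
print(anonymize(excerpt))
# -> Plaintiff emailed [EMAIL REDACTED] on [DATE REDACTED] regarding Defendant documents.
```

A simple dictionary pass like this also yields the mapping key for free, which supports de-anonymizing the AI output afterward.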

⚡ The Situation

You're preparing deposition testimony from a plaintiff who disclosed prescription opioid use during recovery, which the defendant will weaponize to argue 'plaintiff is drug-seeking.' You must help the plaintiff anonymize sensitive medical information in her deposition testimony while preserving the core narrative of pain and treatment compliance.

⚖ Advocacy Principle
Anonymization techniques in deposition contexts focus on protecting privacy without sacrificing credibility—NITA methodology teaches that generalized language ('I took prescribed pain medication') is often more persuasive than detailed regimens, while avoiding the appearance of evasiveness.
Prompt 7.1: Anonymize Deposition Testimony for AI Analysis
I need to anonymize the following deposition transcript for analysis by an AI tool, while preserving the substantive testimony and logical flow. Replace all proper names with roles (Witness, Plaintiff Counsel, Defense Counsel), remove specific dates (substitute ordinal references like "Month 1," "Week 2"), redact deposition exhibit numbers, and remove identifying details like home addresses, phone numbers, and email addresses. Generic location descriptions (e.g., "the workplace") may replace specific venues if geography is not material to the testimony. [PASTE DEPOSITION TRANSCRIPT] Provide the anonymized transcript with a separate key showing the mapping of original identifiers to anonymized references.

Privilege-Safe Workflows

Establish clear protocols: (1) determine whether content is privileged before uploading; (2) use only enterprise AI platforms with data confidentiality agreements; (3) maintain a separate log of what information was shared with which platforms; (4) never upload original documents—anonymize first; (5) document the business rationale for AI use as part of the legal strategy.
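The disclosure log in step (3) can be as simple as a structured CSV maintained alongside the matter file. The following is a minimal sketch with illustrative field names, not a prescribed format; note how it also enforces step (4) by refusing entries not marked as anonymized:

```python
import csv
import datetime
import io

# Illustrative field names for the disclosure log in step (3); not a
# prescribed format.
LOG_FIELDS = ["date", "platform", "matter_id", "content_description",
              "anonymized", "privilege_reviewed_by"]

def log_ai_disclosure(rows: list, entry: dict) -> None:
    """Append one disclosure entry, enforcing step (4): anonymize first."""
    if not entry.get("anonymized"):
        raise ValueError("Protocol step (4): anonymize before upload")
    entry.setdefault("date", datetime.date.today().isoformat())
    rows.append(entry)

rows = []
log_ai_disclosure(rows, {
    "platform": "Enterprise AI platform (hypothetical)",
    "matter_id": "M-001",  # illustrative matter number
    "content_description": "Anonymized deposition excerpt",
    "anonymized": True,
    "privilege_reviewed_by": "Reviewing attorney",
})

# Persist the running log as CSV for later audit.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=LOG_FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```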

⚡ The Situation

Your paralegal is organizing 2,000 pages of document review for privilege log creation. She notices three 'draft' documents that appear to be attorney work product but might be discoverable if they're business records merely copied to counsel. She must identify privileged materials while preserving attorney-client privilege against inadvertent waiver.

⚖ Advocacy Principle
Privilege-safe document workflows require clear protocols for identifying and segregating privileged materials—the foundational commandments emphasize that privilege logs must distinguish attorney work product (opinions, analysis) from business records sent to counsel (which remain discoverable), with clear segregation preventing accidental production.
Prompt 7.2: Generate Privilege Waiver Risk Assessment
Review the following work product and assess whether uploading anonymized excerpts to an external AI tool would waive attorney-client privilege or work product protection. Consider: (1) whether the content contains legal advice or attorney strategy; (2) whether anonymization removes sufficient identifying information to prevent waiver; (3) whether the AI platform has enterprise confidentiality agreements; (4) applicable state law on privilege waiver by disclosure to third parties. [PASTE WORK PRODUCT] Provide a risk assessment in memo format with specific recommendations on which portions may be safely shared and which should be analyzed without external AI tools.

AI Output Verification & Hallucination Detection

AI systems, including advanced models, can generate plausible-sounding but factually incorrect information—a phenomenon known as "hallucination." In litigation, relying on unverified AI output can lead to sanctions, ethical violations, and case dismissals. Mata v. Avianca, Inc. (S.D.N.Y. 2023) exemplifies the danger: an attorney submitted a brief citing non-existent case law generated by ChatGPT, resulting in Rule 11 sanctions.

Verification Protocol: Treat all AI-generated citations, legal analysis, and factual claims as unverified. Cross-check every citation against primary sources (Westlaw, LexisNexis, or court websites), verify factual assertions independently, and confirm that AI-recommended legal arguments align with current case law.

⚖ Ethics Note: ABA Model Rule 3.3 Candor to the Tribunal

An attorney shall not knowingly make a false statement of fact or law to a tribunal. This obligation applies whether the false statement originates from the attorney or is generated by an AI tool used in case preparation. Submitting unverified AI output without disclosure violates this rule and can trigger sanctions under Federal Rule of Civil Procedure 11.

⚡ The Situation

This prompt focuses on ethical deposition preparation protocols that preserve witness credibility and prioritize testimony accuracy.

⚖ Advocacy Principle
Ethical witness preparation balances advocacy with truthfulness—NITA methodology teaches that instructing witnesses on 'how to testify' (pause, think, answer specifically) is appropriate; instructing them what answers to give is not.
Prompt 7.3: Systematic Citation Verification Checklist
Create a structured checklist for verifying AI-generated legal citations before use in court filings. The checklist should cover: (1) confirming the case exists and the citation format is correct; (2) verifying the holding cited matches the opinion text; (3) checking whether the case has been overruled, reversed, or limited by subsequent decisions; (4) confirming the court level and jurisdiction; (5) checking the publication date and procedural posture; (6) noting whether the case involves analogous facts or distinct circumstances. Format as an Excel-compatible table with columns for Citation, Holding Cited, Verification Status, Current Status, Notes, and Cleared for Filing. Include a row for each AI-provided citation that must be individually verified.
⚡ The Situation

This prompt addresses conflicts between client confidentiality and deposition candor when the client's narrative may harm their case.

⚖ Advocacy Principle
Conflicts between client interests and case strategy are resolved by frank client counseling and documentation of client choices—established advocacy principles emphasize that you must advise clients of risks but ultimately respect their litigation decisions unless they exceed ethical bounds.
Prompt 7.4: Identify Hallucinations in AI Legal Analysis
Review the following AI-generated legal analysis and identify red flags indicating potential hallucination or unverified claims: [PASTE AI ANALYSIS] Specifically flag: (1) citations that sound plausible but may not exist; (2) factual claims unsupported by cited authorities; (3) legal principles stated without case support; (4) circuit splits or conflicting authorities presented without reconciliation; (5) procedural statements that contradict known rules; (6) anachronistic references (e.g., citing pre-2020 law as current). Provide a marked-up version with specific verification tasks needed before the analysis can be used in any court filing.

Ethical Obligations When Using AI in Litigation

Using AI in litigation triggers several overlapping professional responsibility obligations. The ABA has issued guidance confirming that technology competence is now a baseline requirement under Model Rule 1.1 (Competence). Multiple state bars have published ethics opinions addressing AI disclosure, data security, and the duty to understand AI limitations.

Primary Obligations:

  • ABA Model Rule 1.1 requires competence, including understanding AI capabilities and limitations
  • ABA Model Rule 1.6 protects client confidentiality even when using AI tools
  • ABA Model Rule 3.3 prohibits making false statements to courts, whether AI-generated or not
  • ABA Model Rule 5.1 requires managing lawyers to ensure non-lawyer technology use complies with ethics rules
⚖ Ethics Note: Competence and Technology Literacy

The ABA Standing Committee on Ethics and Professional Responsibility (Formal Opinion 512) clarifies that Model Rule 1.1 now requires lawyers to understand the benefits and risks of technology, including AI. Ignorance of AI limitations is not a defense against sanctions or malpractice claims. Staying informed about AI developments through CLE, bar association guidance, and vendor documentation is a professional obligation.

⚡ The Situation

This prompt covers ethical impeachment preparation when your own witness's prior deposition contradicts his trial testimony.

⚖ Advocacy Principle
When your own witness contradicts his deposition, you must address it directly rather than hoping opposing counsel doesn't notice—leading cross-examination texts teach that "I testified differently at my deposition, but now I remember more clearly" is far more credible than being impeached on cross.
Prompt 7.5: Generate ABA Model Rule Compliance Memo
Draft a memo to our litigation team addressing compliance with ABA Model Rules 1.1, 1.6, 3.3, and 5.1 in the context of AI tool usage. The memo should: (1) summarize each rule and its application to AI; (2) identify specific practices in our office that implicate each rule; (3) recommend policies and procedures to ensure compliance; (4) address disclosure obligations to clients and courts; (5) explain consequences of non-compliance, including sanctions and malpractice exposure. The memo should be written for a non-technical audience (partners and of counsel) and be accessible to attorneys with limited AI background.
⚡ The Situation

This prompt addresses the ethics of using deposition testimony from a plaintiff who later claims her discovery responses were coerced or inaccurate.

⚖ Advocacy Principle
Discovery responses obtained under threat or duress may be inadmissible and create sanctions risks—NITA methodology teaches that contemporaneous documentation of a witness's voluntary cooperation and understanding is essential if later challenges arise.
Prompt 7.6: Research Applicable State Bar Ethics Opinions on AI
Compile a summary of formal ethics opinions issued by the [STATE] State Bar (and any applicable federal circuit guidelines) regarding the use of AI tools in legal practice. For each opinion, provide: (1) the opinion number and date; (2) the key question addressed; (3) the ruling or guidance provided; (4) specific practices the opinion permits or prohibits; (5) any disclosure or consent requirements; (6) how the opinion addresses data confidentiality. Format as a reference table with columns for Opinion Number, Issue, Key Ruling, and Application to Litigation Practice. Flag any inconsistencies with ABA guidance.

AI Platform Selection & Data Security

Not all AI platforms are appropriate for legal work. Consumer-grade tools (ChatGPT free tier, public Copilot, Gemini) typically retain user inputs and may use submissions for model training. Enterprise-grade platforms offer data confidentiality agreements, SOC 2 compliance, and contractual guarantees of data non-retention.

Platform Tiers:

  • Consumer (Not Recommended): Free ChatGPT, standard Copilot—data retention unknown, training use possible, no DPA
  • Professional (Conditional): ChatGPT Plus, Microsoft Copilot Pro—improved privacy but still not designed for confidential legal work
  • Enterprise (Recommended): ChatGPT Enterprise, Azure OpenAI, Claude API—contractual data confidentiality, SOC 2 Type II, no training use, dedicated infrastructure
  • Specialized Legal AI: Harvey, Legora, and similar legal AI platforms—built for legal practice, privilege-aware, compliant with legal ethics
⚖ Ethics Note: Duty of Reasonable Care in Platform Selection

Selecting an insufficiently secure AI platform to save cost may constitute negligence or a breach of the confidentiality duty. Using a consumer tool for privileged work without an enterprise agreement can waive privilege. Document your platform selection process, including security assessment and compliance verification, as evidence of reasonable care.
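To document the platform selection process described above, the due-diligence criteria can be reduced to a coarse, auditable score. The control names and thresholds below are illustrative assumptions, not a standard; firm policy should define its own required controls:

```python
# Illustrative controls and thresholds -- not a standard; adapt to firm policy.
REQUIRED_CONTROLS = [
    "no_training_use", "dpa_in_place", "soc2_type2",
    "encryption_at_rest", "access_controls",
]

def security_tier(controls: dict) -> str:
    """Map verified controls to a coarse Low/Medium/High tier."""
    met = sum(bool(controls.get(c)) for c in REQUIRED_CONTROLS)
    if met == len(REQUIRED_CONTROLS):
        return "High"    # candidate for confidential legal work
    if met >= 3:
        return "Medium"  # conditional use, anonymized content only
    return "Low"         # not appropriate for client data

consumer = {"encryption_at_rest": True}
enterprise = {control: True for control in REQUIRED_CONTROLS}
print(security_tier(consumer), security_tier(enterprise))  # Low High
```

Recording the scored assessment for each vendor provides the documentation of reasonable care the ethics note contemplates.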

⚡ The Situation

This prompt covers the ethical boundaries of using social media information obtained without the opposing party's knowledge to impeach witness testimony.

⚖ Advocacy Principle
Publicly available social media can be used for impeachment without violating ethics rules, but "friend" requests or fake profiles that misrepresent identity cross ethical lines—the foundational commandments advise consulting state ethics opinions before using social media as a deposition impeachment tool.
Prompt 7.7: Create AI Platform Security Assessment Template
Create a due-diligence checklist for evaluating whether an AI platform is secure enough for confidential legal work. The checklist should assess: (1) data retention policies (whether user inputs are retained or used for training); (2) contractual data protection agreements (DPA, BAA if applicable); (3) compliance certifications (SOC 2, ISO 27001, HIPAA if health data is involved); (4) encryption in transit and at rest; (5) access controls and audit logs; (6) incident response procedures; (7) geographic data residency options; (8) vendor's liability limits and insurance. For each item, specify: Required for Legal Use (Yes/No), How to Verify, and Red Flags. Include a summary scoring system (Low/Medium/High security) to guide firm policy on which tools can be used for which work.
⚡ The Situation

This prompt addresses conflicts when a testifying witness is also a party's employee whose deposition may expose corporate liability.

⚖ Advocacy Principle
Employees testifying in corporate cases must understand that their testimony may harm both employer and themselves—NITA methodology teaches that independent counsel should advise employees about the risks of corporate representation before their deposition.
Prompt 7.8: Draft Data Processing Agreement (DPA) Requirements Memo
Draft a memo to firm management addressing essential terms for Data Processing Agreements (DPAs) with AI vendors. The memo should: (1) explain what a DPA is and why it's legally necessary; (2) identify key protections a DPA must provide (no training use, no sharing with third parties, encryption, incident notification); (3) recommend standard contract language for firm DPAs; (4) identify red flags in vendor DPA proposals (e.g., unilateral termination rights, minimal liability caps); (5) outline the approval process for AI vendors before firm-wide use. Include a checklist for in-house counsel to use when reviewing AI vendor agreements.

Prompt Engineering for Reliable AI Output

Well-crafted prompts significantly improve AI output quality, reducing hallucinations and improving legal accuracy. Specific prompting techniques—such as requesting step-by-step reasoning, requiring citations, and specifying applicable law—produce more reliable results than vague requests.

Key Techniques: Be specific about jurisdiction and applicable law; request citations for all legal claims; ask for step-by-step reasoning to reveal flawed logic; use "break down" and "explain" rather than "summarize"; specify the intended use (e.g., "for a court filing" signals higher accuracy standards); ask the AI to identify uncertainties and limitations in its response.

⚡ The Situation

This prompt covers ethical obligations when deposition discovery reveals evidence of crimes (child abuse, fraud, environmental violations) that may trigger mandatory reporting duties.

⚖ Advocacy Principle
Deposition testimony revealing ongoing crimes may trigger ethical duties to report to authorities despite attorney-client privilege concerns—trial advocacy training advises consulting ethics counsel immediately if deposition testimony suggests ongoing harm.
Prompt 7.9: Analyze Legal Standard with Citations and Step-by-Step Reasoning
Provide a comprehensive analysis of [LEGAL STANDARD/BURDEN OF PROOF] under [STATE] law and [APPLICABLE FEDERAL LAW]. Your analysis should:
1. State the legal standard precisely, with citations to statute and leading cases
2. Break down each element required to satisfy the standard (numbered list)
3. For each element, provide the most important controlling authority (cite case name, court, year, and the specific holding)
4. Identify any circuit splits, competing interpretations, or evolving standards
5. Explain any recent changes to the standard (case law or statutory amendments in the past 5 years)
6. Acknowledge any uncertainties, areas where courts disagree, or limitations in your analysis
Format as a structured outline suitable for direct use in a legal memo. Flag any claims that require independent verification before use in a court filing.
⚡ The Situation

This prompt addresses the ethics of videotaping depositions when state law may restrict recording or when a witness objects.

⚖ Advocacy Principle
Deposition recording rules vary by jurisdiction and can trigger ethical violations if local rules are ignored—NITA methodology teaches that written stipulations agreeing to recording should precede any taping, avoiding later disputes about admissibility.
Prompt 7.10: Generate Comparative Legal Analysis with Confidence Scores
Compare the following two legal theories: [THEORY A] and [THEORY B] under [STATE] law in the context of [CASE TYPE]. For each theory, provide:
1. The controlling case law and statutes
2. The factual elements that must be proven
3. Recent case outcomes applying each theory (last 5 years, if available)
4. Likelihood of success based on current case law (High/Medium/Low, with brief explanation)
5. Key vulnerabilities or counterarguments
6. Which theory is more favorable to our client
For each claim and citation, indicate your confidence level (High/Medium/Low) and specify what additional research or verification would be needed before relying on this analysis in a court filing.

Disclosure Requirements in Court Filings and to Opposing Counsel

Courts increasingly require disclosure of AI use in litigation. Some jurisdictions have local rules requiring identification of AI-generated content; others expect disclosure in attorney certifications. Opposing counsel may raise ethical objections if AI use appears to be hidden. Transparency about AI use, when paired with clear verification protocols, demonstrates competence and good faith.

Disclosure Best Practices: Include a statement in engagement letters that AI may be used for research and analysis; notify opposing counsel when AI is material to your legal strategy; disclose AI use in court filings if local rules require it or if AI-generated content is substantive; maintain documentation of what AI was used and how output was verified.

⚖ Ethics Note: Transparency and Candor

While no universal rule requires disclosure of all AI use, transparency about material AI reliance demonstrates candor and reduces the risk of ethical challenges. If opposing counsel discovers undisclosed AI use, it can trigger questions about competence and good faith. Proactive disclosure—especially for novel or high-stakes analysis—is the safer approach and often improves client relationships.

⚡ The Situation

This prompt covers conflicts arising when a witness's deposition testimony implicates a prior attorney-client relationship or reveals privileged information.

⚖ Advocacy Principle
Witnesses may inadvertently reveal information about prior attorney relationships; you must stop the deposition to address privilege claims before testimony proceeds—the foundational commandments advise that the questioning attorney should note the claim, cease questioning, and seek court direction.
Prompt 7.11: Draft AI Disclosure Statement for Court Filing
Draft a disclosure statement for inclusion in a court filing regarding AI use in case preparation. The statement should: (1) identify which AI tools were used and for what purpose (research, analysis, document review); (2) explain how output was verified and integrated into attorney analysis; (3) affirm that the filing complies with all applicable ethics rules and court orders; (4) confirm that no AI-generated content was included without attorney review and approval; (5) explain the qualifications of the attorney who reviewed AI output. The statement should be professional, brief (2-3 paragraphs), and reassure the court that AI use was methodical and compliant. Format it as suitable for inclusion in a Declaration or Certificate of Counsel.

AI-Assisted Research Verification Against Westlaw and LexisNexis

AI legal research tools can be fast but are not independently reliable. Always cross-check AI-identified cases and legal principles against authoritative research platforms (Westlaw or LexisNexis). AI may miss recent overruling decisions, misstate holdings, or conflate different cases.

Verification Workflow: (1) Have AI identify relevant cases and legal standards; (2) search Westlaw or LexisNexis independently using the same search terms; (3) compare AI-identified cases against your own search results; (4) verify each case citation, holding, and court; (5) check the treatment of each case (status indicators in Westlaw/Lexis); (6) verify current validity using KeyCite (Westlaw) or Shepard's (Lexis); (7) document any discrepancies between AI and authoritative sources.
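Step (3) of this workflow, comparing AI-identified cases against your independent search results, is a straightforward set comparison. The case names below are placeholders, not real research results:

```python
# Placeholder case names, not real research results.
ai_cases = {"Case A v. B", "Case C v. D", "Case E v. F"}
independent_cases = {"Case C v. D", "Case E v. F", "Case G v. H"}

confirmed = ai_cases & independent_cases     # found by both sources
unverified = ai_cases - independent_cases    # AI-only: possible hallucination
missed_by_ai = independent_cases - ai_cases  # relevant authority AI omitted

print(sorted(confirmed))     # ['Case C v. D', 'Case E v. F']
print(sorted(unverified))    # ['Case A v. B']
print(sorted(missed_by_ai))  # ['Case G v. H']
```

Anything in the AI-only bucket needs citation-by-citation verification before use; anything in the missed-by-AI bucket is evidence the AI research was incomplete.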

⚡ The Situation

This prompt addresses the ethics of mischaracterizing a witness's deposition testimony in trial or settlement negotiations.

⚖ Advocacy Principle
Misrepresenting deposition content to opposing counsel or the court violates professional responsibility rules and can result in sanctions—NITA methodology teaches that quoting deposition testimony out of context is ethically indistinguishable from fabricating quotes.
Prompt 7.12: Create Comparative Research Audit Template
Create a template for documenting AI legal research and comparing it against independent Westlaw or LexisNexis searches. The template should track: (1) the legal issue researched; (2) the AI research request (query used); (3) cases identified by AI; (4) cases identified independently via Westlaw or LexisNexis using identical search terms; (5) comparison: cases found by both sources vs. cases unique to AI vs. cases unique to traditional research platforms; (6) any cases AI missed that are highly relevant; (7) verification of each case's current status (valid law, overruled, limited); (8) final conclusion on whether AI research was reliable and complete. Format as an Excel workbook with separate tabs for Issue, AI Results, Westlaw/Lexis Results, Comparison, and Audit Notes. Include a summary section assessing research quality.

Client Communication About AI Usage

Informed consent is a cornerstone of ethical AI use in litigation. Clients should understand that AI will be used in their representation, what AI will be used for, and how the output will be verified. This discussion should be documented in the engagement letter and revisited if AI use expands or if concerns arise.

Engagement Letter Provisions: Include language confirming that AI tools may be used for research, document analysis, and other tasks; specify that output will be reviewed by qualified attorneys; confirm that client data will be protected by confidentiality agreements; explain any limitations or risks of AI (such as the possibility of hallucinations); allow the client to opt out of AI use if desired.

⚡ The Situation

This prompt covers ethical obligations when a party's counsel attempts to 'coach' an opposing party's witness during a deposition break.

⚖ Advocacy Principle
Conferring with a witness during deposition breaks may be permissible if the witness requests counsel (especially if unrepresented), but instructing the witness how to answer is not—trial advocacy training advises that you should object to opposing counsel's conduct and create a record if improper coaching is suspected.
Prompt 7.13: Draft AI Disclosure and Consent Provisions for Engagement Letter
Draft comprehensive AI disclosure and consent provisions for our standard engagement letter in litigation matters. The provisions should: (1) explain that AI tools will be used for legal research, factual analysis, and case strategy; (2) identify specific tools by category (legal research AI, document review AI, writing assistance); (3) confirm that all AI output will be reviewed by qualified attorneys before use; (4) describe how client data will be protected (encryption, confidentiality agreements, no training use); (5) acknowledge that AI has limitations, including the possibility of generating inaccurate information; (6) allow the client to request that certain sensitive information not be processed by AI; (7) reserve the right to discuss AI use with the court or opposing counsel if required by ethics rules or court order. The language should be clear and accessible to non-lawyer clients while providing legal protection for the firm.

Avoiding Unauthorized Practice of Law Through AI

AI tools can generate plausible legal advice. If a firm uses AI to provide legal advice to non-clients (such as prospective clients, opposing parties' representatives, or the general public), it risks unauthorized practice claims and disciplinary action. The line between providing information and providing legal advice is fact-dependent and jurisdiction-specific.

Risk Areas: Publishing AI-generated legal analysis on the firm website without attorney review; using AI chatbots to answer legal questions on the firm website; providing AI-assisted analysis to non-client business partners or third parties; marketing AI-powered legal tools to the public without attorney supervision.

⚡ The Situation

This prompt addresses conflicts when your client's deposition testimony contradicts her trial testimony and opposing counsel seeks to use the inconsistency for impeachment.

⚖ Advocacy Principle
Clients changing testimony between deposition and trial face credibility destruction that no attorney advocacy can cure—leading cross-examination texts teach that the only salvageable response is "I was wrong before; here's why I now know the truth," but even that rarely overcomes jury skepticism.
Prompt 7.14: Assess Unauthorized Practice Risk in AI Deployment
Review the following proposed use of AI in the firm's client-facing or public-facing operations and assess the risk that it constitutes unauthorized practice of law: [DESCRIBE PROPOSED AI DEPLOYMENT: e.g., website chatbot, client portal, public legal guidance tool] Analyze: (1) whether the AI output constitutes "legal advice" under [STATE] law; (2) whether output will be reviewed by a licensed attorney before reaching the recipient; (3) whether the tool targets clients (permitted) vs. non-clients (riskier); (4) whether disclaimers adequately limit the scope of the tool; (5) any state bar opinions or ethics rules addressing similar tools; (6) comparable tools used by competitors and their risk profile. Provide a risk assessment (Low/Medium/High unauthorized practice risk) with specific recommendations to mitigate risk through attorney review, disclaimers, or operational changes.

Document Redaction Protocols Before AI Upload

Before uploading any document to an AI platform, implement a systematic redaction process to remove or mask sensitive information. Even with anonymization, documents may contain identifying details that, when combined, could reveal client identity or privileged content.

Information to Redact: Client names and contact information; opposing party names and details; witness names and addresses; document identifiers (Bates numbers, deposition exhibit numbers); specific dates (substitute with ordinal references); account numbers and financial identifiers; medical information or disability details; sealed or confidential case details; any reference to settlement demands or confidential business information.
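As a final safety net after manual redaction, a script can scan a document for identifier patterns that survived review. The regular expressions below are illustrative and deliberately simple; they supplement, never replace, attorney review of the redacted document:

```python
import re

# Illustrative, deliberately simple patterns; a real protocol needs
# jurisdiction- and matter-specific lists, and attorney review remains final.
RESIDUAL_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "bates": re.compile(r"\b[A-Z]{2,5}\d{5,}\b"),  # e.g., ABC000123
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def residual_identifiers(text: str) -> dict:
    """Return pattern hits that should block upload until re-redacted."""
    hits = {name: pattern.findall(text)
            for name, pattern in RESIDUAL_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

sample = "See exhibit ABC000123; call 555-867-5309 or write to foo@bar.com."
print(residual_identifiers(sample))
```

A non-empty result means the document fails the pre-upload check and returns to the redaction step.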

⚡ The Situation

This prompt covers the ethics of withholding damaging deposition testimony from your own client, a practice that exposes you to claims that you concealed evidence.

⚖ Advocacy Principle
You must provide clients with complete discovery, including damaging deposition testimony, to allow informed litigation decisions—the foundational commandments emphasize that hiding deposition admissions from your client violates your duty of candor and creates malpractice exposure.
Prompt 7.15: Create Document Redaction Checklist for AI Upload
Create a detailed pre-upload redaction checklist for documents being prepared for analysis by external AI tools. The checklist should include categories of information to review and redact:
1. Party Identification (client name, opposing parties, third parties)
2. Individual Details (addresses, phone, email, social security numbers, dates of birth)
3. Financial Information (account numbers, transaction amounts, settlement demands)
4. Medical/Personal (health conditions, disabilities, personal facts unrelated to legal issue)
5. Sensitive Business (trade secrets, pricing, confidential strategies)
6. Document Markers (Bates numbers, deposition exhibit labels, document dates)
7. Procedural Details (court docket numbers, sealed order references, confidential case information)
8. Privilege-Sensitive Content (attorney strategy, legal opinions, work product)
For each category, provide: (1) the type of information to identify; (2) recommended redaction approach (replace with generic term or remove entirely); (3) verification that redaction preserves analytical value; (4) examples of acceptable and unacceptable redactions. Include a sign-off section where the responsible attorney certifies that redaction is complete and the document is safe for external AI analysis.

AI Bias Awareness and Mitigation

AI systems can exhibit bias in multiple ways: training data may reflect historical discrimination; AI may weight certain arguments more heavily for certain demographics; recommendations may be influenced by the prevalence of certain cases in training data. In litigation, AI bias can lead to overlooked defenses, ineffective strategies, or ethical missteps.

Bias Vectors: Racial and gender bias in case outcomes (AI may predict outcomes based on demographic patterns rather than legal analysis); socioeconomic bias (AI trained on cases involving wealthy parties may not properly analyze cases involving poor litigants); jurisdictional bias (AI trained primarily on federal cases may misapply state law); procedural bias (AI may overweight certain procedural objections based on their prevalence in training data rather than merit).

⚖ Ethics Note: Competence and Bias Mitigation

Understanding and mitigating AI bias is part of the duty of competence. Using AI-generated analysis without considering potential bias could result in inadequate representation, especially if the analysis disadvantages a protected class of clients. Document your bias review process as evidence of competent, ethical use of AI.

⚡ The Situation

This prompt addresses the conflict that arises when a deposition produces testimony contradicting your client's prior factual statements to you.

⚖ Advocacy Principle
When client statements contradict deposition testimony, you must address the contradiction with the client before trial to understand whether the client misled you or whether memory has been refreshed—NITA methodology teaches that client misstatements require reconsideration of trial strategy and possible withdrawal.
Prompt 7.16: Evaluate AI Analysis for Potential Bias
Review the following AI-generated legal analysis and evaluate it for potential bias. Consider: [PASTE AI ANALYSIS] 1. Demographic Assumptions: Does the analysis make assumptions about any party's race, gender, age, socioeconomic status, or other protected characteristic? Are any of these assumptions material to the legal conclusion? 2. Outcome Predictions: If the AI predicted a trial outcome, case valuation, or settlement likelihood, assess whether the prediction may be influenced by historical patterns in the training data rather than the legal merits. 3. Evidentiary Weight: Does the analysis weight certain types of evidence (e.g., expert testimony, witness credibility) differently based on how often such evidence appears in the training data? 4. Procedural Arguments: Are certain procedural defenses emphasized because they are common in training data, regardless of applicability to this case? 5. Jurisdiction-Specific Law: If the analysis applies law from a different jurisdiction, could the mismatch introduce bias? Provide a summary of potential biases identified and specific recommendations to mitigate them through attorney review and additional research.

Record-Keeping for AI-Assisted Work

Document every use of AI in case preparation. This documentation serves multiple purposes: it creates a record of attorney review (defending against claims of negligent AI use), it supports privilege claims (showing that AI was used as a tool under attorney direction), and it prepares you to answer future challenges by opposing counsel (by demonstrating that the analysis is reliable and verified).

What to Document: The date and time AI was used; the specific tool and version; the input provided to the AI (summary, not full text if privileged); the output generated; how the output was reviewed and verified; what conclusions were incorporated into the work product; any limitations or hallucinations identified; the attorney who reviewed the output.
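The fields listed above map naturally onto a simple structured log. The sketch below is one possible implementation, assuming a CSV file is an acceptable format for the firm; the field names and `append_entry` helper are illustrative choices, not a prescribed standard, and input summaries should never reproduce privileged text.

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

# Hypothetical log schema based on the "What to Document" list above.
@dataclass
class AIUsageEntry:
    used_on: str             # date and time AI was used
    tool_and_version: str    # specific tool and version
    input_summary: str       # summary only -- never paste privileged text
    output_summary: str      # what the AI generated
    verification: str        # how the output was reviewed and verified
    incorporated: str        # conclusions carried into the work product
    limitations: str         # hallucinations or limitations identified
    reviewing_attorney: str  # attorney who reviewed the output

def append_entry(path: str, entry: AIUsageEntry) -> None:
    """Append one log row, writing a header row if the file is new."""
    header = [f.name for f in fields(AIUsageEntry)]
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=header)
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(entry))
```

For example, `append_entry("ai_log.csv", AIUsageEntry("2025-01-15 10:00", "ToolX v2", ...))` adds one row per AI use; the resulting file can be imported into the Excel workbook described in Prompt 7.17 for monthly review.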

⚡ The Situation

This prompt covers ethical obligations when opposing counsel submits a deposition transcript with apparent errors or omissions, and your client benefits from those errors.

⚖ Advocacy Principle
Notifying opposing counsel of apparent deposition transcript errors is ethically required even if the errors benefit your client—trial advocacy training advises that remaining silent about obvious transcript mistakes while using them to your advantage constitutes fraud on the court.
Prompt 7.17: Create AI Usage Log Template
Create a standardized AI Usage Log template for tracking all AI-assisted work in litigation matters. The log should capture: 1. Matter Information (case name, matter number, client) 2. AI Tool Used (name, version, platform, date used) 3. Purpose (research, analysis, document review, other) 4. Input Summary (describe what was provided to the AI, without reproducing privileged content) 5. Output Type (citations, analysis, document review results, other) 6. Verification Performed (citations checked against Westlaw/LexisNexis/Clio, analysis reviewed by senior attorney, fact-checked against court documents) 7. Outcome (results incorporated into work product, results rejected as unreliable, flagged for further review) 8. Responsible Attorney (who reviewed and approved the output) 9. Privilege/Confidentiality Note (whether the analysis is protected by attorney-client privilege) Format as an Excel workbook with a separate row for each AI use. Include instructions for completion and a summary section for monthly review.

Competence Requirements and Technology Literacy

The ABA's Standing Committee on Ethics and Professional Responsibility has confirmed that technology competence is now a baseline requirement for lawyers. This includes understanding AI capabilities, limitations, risks, and appropriate use cases. Staying informed is an ongoing obligation, not a one-time training.

Competence Development: Attend CLE courses on AI in law; read bar association ethics opinions and guidance; monitor vendor release notes and updates to AI tools in use; join practice-area listservs discussing AI applications; experiment with tools in low-stakes contexts to understand their strengths and weaknesses; engage with ethics counsel or consultants when novel AI applications are proposed.

⚡ The Situation

This prompt addresses the issues that arise when a witness is deposed while taking medication or experiencing a mental health condition that may impair the reliability of the testimony.

⚖ Advocacy Principle
Deposing witnesses while impaired (medications, mental illness) raises credibility questions but is not grounds for stopping the deposition unless the witness is unable to understand questions—NITA methodology teaches that the transcript itself will reflect any impairment through testimony coherence.
Prompt 7.18: Develop Firm-Wide AI Competence Program
Design a comprehensive AI competence development program for our litigation practice. The program should include: 1. Mandatory Training (initial and annual): topics such as AI capabilities and limitations, hallucination risks, ethics rules governing AI use, data security requirements, prompt engineering best practices 2. Specialized Tracks by Role (e.g., senior attorneys reviewing AI output, paralegals using AI for research, practice leaders evaluating new tools) 3. Vendor Training: opportunities to learn new tools as they are adopted by the firm 4. Hands-On Workshops: supervised practice with AI tools, debate over output quality, verification protocols 5. Ethics Discussion Series: quarterly discussions of relevant bar opinions, recent sanctions or ethics violations involving AI, emerging AI risks 6. Competence Verification: end-of-year assessment of technology competence for purposes of license renewal and malpractice insurance 7. Resource Library: collection of ABA opinions, state bar guidance, law review articles, and case studies on AI in litigation Outline the implementation timeline, responsible parties, and success metrics for the program.

Court-Specific AI Rules and Local Practices

An increasing number of courts are adopting local rules governing AI use in litigation. Some courts require disclosure of AI-generated content in filings; others have established restrictions on certain tools. Judges vary in their receptiveness to AI; understanding the specific court's stance is essential.

Research Requirements: Before filing, review local rules of the specific court for any provisions addressing AI; research published opinions by judges in that court discussing AI reliability; consult with local counsel or other attorneys familiar with the court's expectations; consider reaching out to opposing counsel informally to discuss AI use norms; file a protective disclosure statement if uncertain.

⚡ The Situation

This prompt covers the ethics of obtaining deposition testimony from a minor without parental consent or court authorization.

⚖ Advocacy Principle
Deposing minors without parental presence or court order may violate state laws protecting children and create discovery sanctions—the foundational commandments advise that written consent from parents or legal guardians should precede any minor deposition, documented in the file.
Prompt 7.19: Research Court-Specific AI Requirements and Standing Orders
Compile a comprehensive guide to AI rules and expectations in the [COURT NAME] for use by our litigation team. Research should identify: 1. Formal Local Rules: any local rules requiring AI disclosure, restricting certain tools, or regulating AI use in specific contexts (discovery, briefs, expert reports) 2. Standing Orders: any standing orders issued by judges in that court addressing technology use or AI 3. Published Opinions: any opinions by judges in that court discussing AI reliability, evidentiary admissibility of AI output, or ethics concerns 4. Practitioner Norms: unwritten expectations from attorneys familiar with the court regarding AI transparency 5. Prior Motions and Orders: any prior cases in that court involving AI-related disputes (e.g., sanctions for AI misuse, admissibility disputes) 6. Court Administrator Guidance: any guidance from the court clerk or court administration office regarding technology use Format as a reference guide for attorneys, organized by topic (disclosure, admissibility, tool restrictions) with specific citations and practice tips.

Chapter 7 Complete | Ethics, Security & AI Best Practices for Litigation
19 production-ready prompts for AI governance in legal practice
Last updated: March 2026