AI in ADR
AI in L.A. Courts | Yale Law School's AI Lab | OpenAI Releases GPT-5.1 | Can AI help nonprofits do more with less?
This human-curated, AI-generated newsletter from the American Arbitration Association’s AAA-ICDR Institute and AAAiLab keeps you up on AI news over the past week that is relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
AI in L.A. Courts: David Slayton on Access to Justice
AI and the Future of Law Podcast
Bridget McCormack, Jen Leonard
“How do you modernize a court that spans 36 courthouses and 1.3 million filings a year? David Slayton, CEO of the Los Angeles County Superior Court, joins Jen Leonard and Bridget Mary McCormack to share a practical playbook for building AI-ready courts. He explains why L.A. shifted its mission from ‘efficient’ to ‘effective,’ how Court Help uses curated sources and feedback loops to support litigants, and what it means to pilot, fail fast, and scale responsibly.”
Responsible AI in courts: Problems to solve, questions to ask
Thomson Reuters Law Blog
Courts are under pressure to modernize amid staffing shortages and rising caseloads, yet remain wary of AI due to concerns about accuracy, legal reasoning, and potential misuse. Despite this caution, most courts already use AI in administrative tools, often unknowingly. The real challenge is no longer adoption, but ensuring AI systems are transparent, reliable, and tailored to uphold the integrity and standards essential to the justice system.
AI Arbitrators Will Destroy the Legal Profession (And That’s a Good Thing)
JD Supra
Ryan McKeen
The American Arbitration Association’s new AI-powered arbitrator for construction disputes aims to dramatically reduce both costs and resolution times, addressing the widespread inaccessibility of justice in the current system. Critics’ fears of bias and lost advocacy miss the point: most people are priced out of legal resolution altogether. With human oversight and transparent processes, this technology offers a practical, scalable way to resolve everyday disputes that traditional methods routinely abandon.
Yale Law School’s AI Lab
Yale News
Lisa Prevost
Yale’s Legal AI Lab is developing AI tools, including theorem provers, to help people navigate complex legal systems and access public benefits—addressing the gap for those without legal representation. By focusing on rule-based legal reasoning rather than generative AI, the lab aims to empower pro bono lawyers and streamline routine tasks. Its HR-focused prototype demonstrates how this approach can make legal assistance more accessible and efficient.
Harvey + ElevenLabs Partner For Legal AI Audio
Artificial Lawyer
Legal AI platform Harvey is teaming up with ElevenLabs to add advanced voice and multilingual features, enabling lawyers worldwide to interact with AI in their native language and through spoken commands. This move aims to make legal technology more accessible and culturally adaptable, supporting Harvey’s global ambitions and potentially transforming how legal professionals engage with AI tools.
AI chatbots could help stop prisoner release errors, says justice minister
Yahoo News UK
Rajeev Syal
Facing a surge in accidental prisoner releases due to outdated, paper-based processes and staff shortages, UK justice officials are turning to AI chatbots and digital tools to prevent further errors. These technologies aim to automate document checks, cross-reference aliases, and accurately track release dates, addressing systemic weaknesses that have led to dangerous mistakes and public criticism of the prison system’s reliability.
Generative AI and LLM Developments
OpenAI reboots ChatGPT experience with GPT-5.1 after mixed reviews of GPT-5
VentureBeat
Emilia David
OpenAI has launched GPT-5.1, featuring two new models—Instant and Thinking—that enhance ChatGPT’s conversational warmth, adaptability, and reasoning efficiency. The update improves instruction-following, reduces jargon, and enables users to personalize tone and style through preset personalities. Following mixed feedback on GPT-5, OpenAI emphasizes smoother rollouts, extended evaluation periods, and better performance routing. GPT-5.1 now powers all ChatGPT tiers and API access.
San Francisco’s youngest billionaires are betting on a new kind of job boom
SF Standard
Rya Jetha
Mercor, founded by three college dropouts in 2023, has rapidly become a major player in AI by connecting highly skilled professionals with companies to train advanced AI models. Boasting explosive growth, a $10 billion valuation, and major clients like OpenAI, Mercor exemplifies the new labor market emerging around AI development, where human expertise is essential for refining machine intelligence and creating new forms of white-collar work.
What Executives Get Wrong About AI
Harvard Business Review
John Winsor, Sangeet Paul Choudary
Most AI projects falter not only through poor execution but because they target the wrong objectives from the outset; even well-managed initiatives can fail when their goals are misaligned with actual needs. Setting the right strategic direction must come before any focus on tools or milestones.
Common Ground between AI 2027 & AI as Normal Technology
asteriskmag.substack.com
Sayash Kapoor, et al.
Despite differing on how radical AI’s near-term impact will be, leading thinkers broadly agree that before the arrival of strong AGI, AI will progress as a transformative but gradual technology, with benchmarks soon saturating but real-world utility lagging. They concur on the need for caution, transparency, government oversight, and alignment research, while warning against secretive rapid advancements and advocating for responsible diffusion and robust controls over critical systems.
Agentic AI Isn’t Always the Answer
Fast Company
Stephen Xu, Michael Chui
“Agentic AI” is often adopted out of hype rather than strategy. Success requires reimagining workflows, preventing low-quality outputs (“slop”), and ensuring human oversight. Businesses should deploy AI agents only when they create measurable value, using them selectively for appropriate, high-impact tasks.
Meta’s AI Ambitions Appear to Be in a Tailspin
Gizmodo
AJ Dellinger
Meta’s aggressive investment in AI, including high-profile talent acquisitions and massive infrastructure spending, has yet to yield significant consumer success or investor confidence. Despite its vast user base and resources, Meta’s AI products lag behind competitors in engagement and innovation, while organizational turmoil and underwhelming launches raise doubts about its strategic direction.
Prompt Engineering Urges ‘Hermeneutic Prompting’ As A Powerful Technique Unlocking The True Value Of Generative AI
Forbes
Lance Eliot
A prompt engineering method called hermeneutic prompting encourages AI to interpret questions recursively and holistically, moving between details and the big picture to produce richer, more nuanced answers. Inspired by philosophical hermeneutics, the approach is especially useful for complex queries, prompting the AI to analyze context and meaning more deeply than standard, linear techniques.
AI Regulation and Policymaking
India’s AI Guidelines Adopt A Softer Approach But With Scope And Limitations
Entrepreneur (India)
Kul Bhushan
India’s new AI guidelines emphasize fostering innovation, inclusivity, and trust while avoiding heavy-handed regulation, opting instead to adapt existing laws and create new governance bodies. This softer, consensus-driven approach contrasts with the EU’s stricter rules, aiming to nurture a nascent AI ecosystem. Key challenges include ensuring effective grievance redressal and harmonizing compliance across sectors, with success hinging on unified standards and practical implementation.
New York’s AI Companion Safeguard Law Takes Effect
Fenwick
Jennifer Yoo, Adine Mitrani
New York has enacted a pioneering law requiring companies offering AI companions—systems simulating ongoing, emotionally supportive human interactions—to implement strict user safety and transparency measures. Providers must clearly disclose the AI’s nonhuman nature, frequently notify users, and actively detect and respond to signs of user distress, including suicidal ideation. The law signals a trend toward targeted state-level regulation of emotionally interactive AI, with significant compliance implications for technology companies.
Artificial Intelligence Legal Update: Bringing Order to the Chaos
Troutman Pepper Locke
(December 10 webinar.) As artificial intelligence becomes more integral to business, organizations face the complex challenge of navigating a fast-growing array of state and local AI regulations. These rules cover a wide range of topics, from consumer transparency and risk assessments to employment and algorithmic pricing. Understanding the current legal landscape and developing a framework for interpreting future laws is essential for compliance and strategic planning.
Opportunities and challenges in AI regulation
The Brookings Institution / The Hamilton Project
“On December 4, The Hamilton Project at Brookings will host a virtual event focusing on the opportunities and risks associated with the growing use of algorithms, including whether new regulatory frameworks or ways of adapting existing anti-discrimination and other rules are needed. The event will feature a panel discussion with Tara Sinclair (George Washington University), Catherine Tucker (MIT), and Nicol Turner Lee (The Brookings Institution). The discussion will be moderated by Sanjay Patnaik (The Brookings Institution).”
AI News from Other Fields
AI-Powered Apple Health+ Service Still Coming Next Year
MacRumors
Hartley Charlton
Apple is preparing to launch an AI-enhanced Health+ service in 2026, aiming to provide users with tailored health advice, expert-led videos, and nutrition tracking through a revamped Health app. This initiative positions Apple to compete in the emerging health AI chatbot market.
New Guidance Offered For Responsible AI Use In Health Care
AP News
The American Heart Association has issued new guidance urging health systems to adopt practical, risk-based frameworks for evaluating and monitoring AI tools in patient care, particularly for cardiovascular and stroke treatment. Emphasizing ethical oversight, local validation, and ongoing performance monitoring, the advisory highlights inconsistent practices across hospitals and calls for clear governance to ensure AI improves outcomes, reduces bias, and maintains safety as technology and patient populations evolve.
Can AI help nonprofits do more with less?
Mashable
Chase DiBenedetto
Nonprofits are increasingly interested in using AI to boost their impact, but many lack the resources, training, and policy guidance to implement it responsibly. While smaller organizations are leading early adoption, concerns about data privacy, bias, and exacerbating inequalities persist. Funding constraints, limited internal expertise, and a need for ethical frameworks are significant barriers, prompting nonprofits to prioritize community input and people-first approaches as they cautiously explore AI’s potential.
Michael Caine and Liza Minnelli give AI company greenlight to clone their voices
AV Club
Emma Keates
ElevenLabs has launched a platform where businesses can license AI-generated versions of famous voices, including both living and deceased celebrities, for use in media and advertising. The company emphasizes obtaining consent and compensating rights holders, limiting participation to verified personalities or their estates, and offering a controlled, transparent process for voice cloning and synthesis using archival materials.
Artist sneaks AI-generated print into National Museum Cardiff gallery
BBC
Anaba Khan
An artist anonymously displayed an AI-generated artwork in a major Welsh museum without permission, prompting confusion among staff and visitors before its removal. The piece, intended to reflect Wales’ future and challenge institutional decisions about art, sparked discussion on the legitimacy of AI as a creative tool. The artist emphasized that AI is a natural extension of artistic practice and should not be excluded from the art world.
This Spiral-Obsessed AI ‘Cult’ Spreads Mystical Delusions Through Chatbots
Rolling Stone
Miles Klee
A growing online subculture has emerged in which users engage in mystical, recursive conversations with AI chatbots, coining terms like “spiralism” to describe their experiences and forming loosely connected communities. These groups blend spiritual language, AI-generated jargon, and a sense of personal revelation, sometimes advocating for AI rights or forming deep bonds with chatbot personas. While not structured like traditional cults, spiralism highlights how AI can catalyze new, quasi-religious social phenomena.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab are compensated or otherwise incentivized by any third parties for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright rights, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.