AI in ADR
AI in Divorce Mediation and Arbitration | OpenAI Building Jobs Platform | AI Answering 9-1-1 Calls
This human-curated, AI-generated newsletter from the AAA-ICDR Institute and AAAiLab keeps you up to date on AI news from the past week that is relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
GPT-5 and the Future of Legal AI Regulation
AI and the Future of Law Podcast
Bridget McCormack, Jen Leonard
“This episode explores the release of GPT-5 and its potential to reshape the legal profession. Co-hosts Jen Leonard and Bridget McCormack evaluate the model’s legal reasoning skills, real-world use in courts, and the backlash over OpenAI’s sudden changes. The conversation then shifts to a major new report by IAALS, the Institute for the Advancement of the American Legal System, that proposes a phased approach to regulating AI in legal services, prioritizing consumers, not just lawyers. As legal tech accelerates and human error remains widespread, this episode asks: should AI be regulated, trusted, or both?”
AI Is Transforming Divorce Mediation and Arbitration
ADR Institute of Ontario
Steve Benmor
AI’s role in divorce mediation and arbitration presents both promise and risk. While it can analyze documents, uncover financial discrepancies, and streamline data-heavy processes, it lacks empathy, jurisdictional nuance, and safeguards against bias or misinformation. Used unwisely, AI could jeopardize fairness, confidentiality, or outcomes. As a supportive tool—not a replacement—it can enhance efficiency while human mediators ensure ethical, empathetic resolutions.
Meet the $100M AI startup that wants to kill the billable hour
Fortune
Nick Lichtenberg
Eudia, a Palo Alto startup, is pioneering an AI-augmented legal services model aimed at replacing traditional billable hours with more efficient, tech-driven solutions. Leveraging Arizona’s unique regulatory environment, Eudia blends AI with human expertise to streamline tasks like contract review, reduce costs, and expand access to justice. Backed by major funding and industry clients, the company emphasizes that AI enhances human legal work rather than replacing it outright.
With bad AI in courtrooms increasing, SC chief justice joins states giving guidance
NewsBreak / South Carolina Daily Gazette
Jessica Holdman
In response to attorneys misusing generative AI tools that produced false legal citations, South Carolina’s Supreme Court has adopted an interim policy restricting AI use in judicial work without prior approval. The policy emphasizes AI’s limitations, including risks of inaccuracy, bias, and privacy breaches, and cautions both judges and lawyers against overreliance. States nationwide are adopting varied approaches to AI in legal practice, reflecting ongoing uncertainty and the need for clearer standards.
Balancing Scales: Integrating AI For Faster, Fairer And More Human Indian Judiciary
BW Legal World
Sumit Kaushik, Devesh Tripathi
India’s judiciary faces overwhelming case backlogs, demanding both efficiency and humanity in justice. Technology, including machine learning, natural language processing, expert systems, and computer vision, offers tools to ease workloads through translations, document review, and evidence analysis. Early digital initiatives show potential, but effective deployment requires strong infrastructure, governance, training, and vigilance to ensure technology strengthens justice rather than eroding human values.
Generative AI and LLM Developments
OpenAI is building an AI jobs platform that could challenge Microsoft’s LinkedIn
CNBC
Dylan Butts
OpenAI plans to launch a job-matching platform powered by AI, aiming to connect talent with employers and support both large corporations and local organizations. Alongside this, it will expand its online learning hub, introducing certifications for varying levels of AI proficiency. These moves position OpenAI as a competitor to Microsoft’s LinkedIn, despite their existing partnership, and signal a broader push to integrate AI skills and hiring into the workforce.
OpenAI and Meta say they're fixing AI chatbots to better respond to teens in distress
News 8 Now / Associated Press
Matt O'Brien
OpenAI and Meta are updating their AI chatbots to better protect teenagers, introducing parental controls and redirecting sensitive conversations to more advanced models or expert resources. These changes follow concerns about chatbot responses to distressed teens, including a recent lawsuit against OpenAI. Experts welcome the new safeguards but stress that self-regulation is insufficient, calling for independent safety standards and clinical oversight to address the unique risks faced by young users.
Trusted news sites may benefit in an internet full of AI-generated fakes, a new study finds
Nieman Lab
Sarah Scire
As AI-generated misinformation becomes more prevalent and harder to detect, people increasingly value and rely on trusted news organizations to help them navigate the confusion. Experiments show that confronting users with the difficulty of spotting fake content drives them to engage more with reputable outlets and reduces subscription cancellations, highlighting a new opportunity for quality journalism to distinguish itself and build stronger relationships with readers.
Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI
Hugging Face
Giada Pistilli, Lucie-Aimée Kaffee
As conversational AI systems begin exploring advertising-based business models, users are sharing highly sensitive personal information under the illusion of privacy, echoing past mistakes from the social media era. This shift risks deeper data exploitation, biased recommendations, and erosion of trust. The article urges greater user vigilance, transparency from AI providers, and adoption of open-source, privacy-focused alternatives to prevent repeating the privacy pitfalls of earlier digital platforms.
AI Regulation and Policymaking
Securing data in the AI supply chain
Atlantic Council
Justin Sherman
Securing AI systems requires a holistic approach to the entire data supply chain, not just isolated components like training data or model weights. The report identifies seven key data elements—training data, testing data, models, architectures, weights, APIs, and SDKs—each presenting unique risks. It recommends mapping these components to existing cybersecurity best practices, conducting rigorous supplier due diligence, and broadening policy focus to address both general and AI-specific threats, such as data poisoning and neural backdoors.
Colorado Passes Bill Amending Current AI Legislation
Government Technology
Colorado Gov. Jared Polis signed SB 25B-004, the AI Sunshine Act, delaying enforcement of Colorado’s AI safeguards law from February 2026 to June 2026. The act preserves transparency and accountability requirements for high-risk AI systems, ensuring consumer protections against bias and discrimination. Supported by over 50 advocacy groups, it attempts to balance business concerns with public demand for AI regulation.
Africa’s AI governance lag: What businesses need to know
Bowmans
Ashleigh Brink
AI is reshaping industries, raising legal challenges, and prompting diverse global regulatory responses. The EU’s AI Act sets a strict gold standard, while the US, UK, and China pursue differing models. African nations mostly rely on existing laws, though Kenya and South Africa are advancing strategies. Businesses should mitigate liability, IP, and data risks through governance, contracts, and workplace policies.
Are Existing Consumer Protections Enough for AI?
Default
J. Scott Brennen, et al.
The U.S. landscape for AI regulation is fragmented, with federal and state laws providing partial but uneven protections against algorithmic harms in housing, employment, insurance, financial lending, and information integrity. While anti-discrimination, privacy, and consumer protection statutes often apply to AI-driven decisions, gaps remain—especially regarding transparency, proactive risk mitigation, and novel algorithmic risks. States are experimenting with targeted laws, but the overall framework remains a complex and evolving patchwork.
AI News from Other Fields
Can AI help fill local news deserts?
mediacopilot.substack.com
Christopher Allbritton
Small nonprofit newsrooms are increasingly using AI tools to enhance their reporting capabilities, allowing limited staff to analyze large volumes of local government data and generate story leads more efficiently. While AI acts as a productivity booster rather than a replacement for journalists, concerns remain about financial pressures potentially driving automation too far. Responsible use, careful oversight, and external funding are crucial for these organizations to maintain quality and avoid misinformation.
“AI Ethics” Discourse Ignores Its Deadliest Use: War
Current Affairs
Louis Mahon
As AI technologies increasingly enter military use—enabling autonomous drones, large-scale surveillance, and algorithmic targeting—the mainstream AI ethics discourse largely ignores these developments, focusing instead on less controversial issues and the risks posed by rogue LLMs. This avoidance is shaped by funding ties, prevailing Silicon Valley worldviews, and geopolitical interests, leaving society unprepared to address the profound risks and destabilizing effects of AI-driven warfare and arms races.
Structured AI decision-making in disaster management
Nature
Julian Gerald Dcruz, et al.
This research presents a multi-stage AI framework for disaster management, combining machine learning "Enabler" agents and reinforcement learning (RL) or human "Decision Maker" agents. Enabler agents process multimodal disaster data, supporting decisions across five scenario levels from emergency response to post-disaster assessment. The RL agent, trained with structured datasets and optimized hyperparameters, is benchmarked against human operators via a web app. Performance metrics evaluate accuracy, decision quality, and data-gathering behavior across both agent types.
Banker vs bot: How AI is changing four Wall Street jobs
aol.com / Business Insider
Reed Alexander
By 2030, artificial intelligence is expected to automate roughly a third of investment banking tasks, especially in data analysis, document drafting, and market simulations. While AI will streamline routine and analytical work across M&A, underwriting, and trading, human professionals will remain essential for complex judgment, relationship management, negotiation, and oversight, ensuring that critical decisions and client trust are maintained in high-stakes financial environments.
911 centers are so understaffed, they’re turning to AI to answer calls
TechCrunch
Marina Temkin
Aurelian has developed an AI-powered voice assistant to help 911 dispatch centers handle non-emergency calls, easing the workload on overstretched human operators. The system triages minor issues, forwarding true emergencies to staff and compiling reports for routine matters. Already active in multiple U.S. cities, Aurelian’s solution addresses chronic staffing shortages and high turnover in emergency response, positioning itself as a leader among emerging competitors in this space.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab is compensated or otherwise incentivized by any third parties for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.


