AI in ADR
GPT-5 Drops | Harvey Reaches $100M ARR | AAA-ICDR "Future Dispute Resolution" Event 10/9-10/10
This human-curated, AI-generated newsletter from the AAA-ICDR Institute and AAAiLab keeps you up to date on the past week’s AI news relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
In collaboration with the International Institute for Conflict Prevention & Resolution and Wolters Kluwer, the third AAA-ICDR Future Dispute Resolution Conference will take place in New York on October 9, 2025, to examine how artificial intelligence is reshaping alternative dispute resolution. The conference will feature industry leaders discussing AI’s impact, regulatory considerations, and ethical challenges. A follow-up hackathon on October 10 will bring together legal and tech professionals to develop innovative ADR solutions, highlighting the sector’s shift toward digital transformation and collaboration.
AI and Arbitration: A perspective from France
Clyde & Co.
Dilara Khamitova
France is integrating AI into arbitration to boost efficiency, using advanced legal tech for research, document management, and compliance, while firmly maintaining ethical and legal safeguards. French and EU regulations restrict AI to a supportive role, ensuring that decision-making remains human-led. Courts and regulators emphasize transparency, data privacy, and human oversight, reflecting a consensus that AI should enhance but never replace human judgment in legal processes.
Legal AI startup Harvey hits $100 million in annual recurring revenue
CNBC
Ashley Capoot
Harvey, a startup offering AI-driven legal tools, has rapidly achieved $100 million in annual recurring revenue within three years, fueled by strong adoption from major corporations and law firms. Its platform streamlines legal research, document drafting, and due diligence, with customer usage expanding quickly after initial adoption. Founded by a former lawyer and an ex-AI researcher, Harvey leverages advanced language models to tailor solutions for the legal sector.
TR Launches ‘CoCounsel Legal’ With Agents + Deep Research
Artificial Lawyer
Thomson Reuters has introduced CoCounsel Legal, a unified AI platform that integrates advanced research tools, proprietary legal content, and agentic AI workflows to streamline legal tasks. Key innovations include Deep Research, which enables multi-step, transparent legal research, and new guided workflows that automate complex legal processes. This consolidation leverages Thomson Reuters’ extensive legal data, aiming to redefine how legal professionals conduct research and execute tasks within a single environment.
Can You Trust AI With Legal Secrets? A Closer Look At Confidentiality, Privilege And Risk
BW Legal World
Attreyi Mukherjee
AI tools like ChatGPT lack the legal confidentiality protections that human lawyers provide, meaning information shared with them can be disclosed in court or to authorities. While AI can enhance legal productivity, it cannot replace the ethical and professional obligations of licensed advocates. Users should avoid sharing sensitive or case-specific details with AI and verify any legal advice with qualified professionals to safeguard their interests.
Case Radar: Making Legal Knowledge Accessible to All Through AI
onesafe.io
Case Radar, launched in 2024, is an AI-driven legal platform designed to make Nigerian legal information readily accessible and affordable for both professionals and the public. By automating legal research and analysis, it streamlines legal services, though users remain cautious about AI accuracy. The platform aims to empower individuals, support legal employment, and plans to expand across Africa, while emphasizing ethical standards and the importance of human oversight.
Generative AI and LLM Developments
GPT-5 and the new era of work
OpenAI
GPT-5: It Just Does Stuff
One Useful Thing
Ethan Mollick
GPT-5 Hands-On: Welcome to the Stone Age
latent.space
OpenAI Finally Launched GPT-5. Here's Everything You Need to Know
WIRED
Kylie Robison
GPT-5 is here. Now what?
MIT Technology Review
Grace Huckins
OpenAI says latest ChatGPT upgrade is big step forward but still can’t do humans’ jobs
The Guardian
Dan Milmo
GPT-5 represents a step forward in AI usability, pairing greater initiative with more reliable task execution. It autonomously tackles complex problems, routing queries to either a reasoning or non-reasoning mode, and excels at coding and tool integration. While its agentic behavior marks clear progress, its prose writing arguably lags behind models like GPT-4.5. GPT-5 improves speed, context handling, and interface personalization, and offers flexible access tiers and Google integration. Despite improved safety and reliability, it does not amount to AGI, nor does it match human adaptability. Nonetheless, it sets a new standard for AI productivity, intuitiveness, and automation across applications.
OpenAI Wants To Give Federal Agencies Access To ChatGPT, Offers AI Platform To U.S. Government For $1 A Year
NewsBreak
Ariela Anís
OpenAI is making ChatGPT Enterprise available to all U.S. federal agencies for just $1 per year, aiming to streamline government operations, reduce administrative burdens, and enhance public service efficiency. This initiative follows a $200 million Department of Defense contract and successful state-level pilots, and includes robust security measures, specialized training, and partnerships to ensure responsible AI adoption across the federal workforce.
gpt-oss: OpenAI validates the open ecosystem (finally)
interconnects.ai
Nathan Lambert
OpenAI has released two powerful, openly licensed text-only reasoning models, marking its first major open model release in years and signaling a strategic shift that could disrupt the API market. These models, optimized for efficiency and enterprise use, offer near state-of-the-art performance under permissive licensing terms, but the release omits base-model checkpoints and full training transparency. The move boosts Western open AI competitiveness but leaves questions about openness, adoption speed, and future research flexibility.
AI Regulation and Policymaking
States take the lead in AI regulation as federal government steers clear
Ars Technica
Anjana Susarla
With federal inaction on AI oversight, state legislatures are stepping in to regulate key areas such as government use, health care, facial recognition, and generative AI. States are enacting laws to boost transparency, require risk disclosures, and mandate risk management practices for AI systems, particularly in critical public functions. New oversight bodies and frameworks are emerging at the state level to address potential biases and harms from AI deployment.
Anti-AI Explained: Why Resistance to AI Is Growing
Built In
Jeff Rumage
Mounting fears over AI’s societal impacts—ranging from job losses and copyright infringement to existential threats—have fueled a diverse anti-AI movement. Artists, activists, and researchers are protesting, suing, and developing technical defenses against AI misuse, while pushing for stronger regulations. Legal actions, industry strikes, and new tools to thwart unauthorized AI training highlight the escalating backlash, even as governments and organizations debate the balance between innovation and oversight.
AI Summit 2025 – From Policy to Practice: Legal Strategies for the Age of AI (Event)
Mayer Brown
Legal and industry experts will convene on September 18 to address challenges and strategies related to AI governance, intellectual property, contracting, antitrust, and security, offering guidance on aligning business practices with emerging laws and ethical standards. The half-day program features insights from policymakers and aims to equip legal professionals and executives with practical approaches for navigating the rapidly changing AI regulatory landscape.
Health care is embracing artificial intelligence. Some Pa. lawmakers say guardrails are needed.
WESA
Vincent DiFonzo
Pennsylvania legislators are seeking to introduce bipartisan regulations on the use of AI in health care, emphasizing the need for transparency and the preservation of human decision-making in patient care. While AI offers efficiency gains and is already integrated into clinical workflows, concerns persist about bias and the erosion of the doctor-patient relationship. This initiative aligns with broader national trends as states move to fill regulatory gaps in the absence of comprehensive federal oversight.
An Illinois bill banning AI therapy has been signed into law
Mashable
Chance Townsend
Illinois has enacted the nation’s first law barring artificial intelligence from independently providing mental health services, requiring that only licensed professionals deliver therapy and review any AI-assisted care. The legislation aims to prevent unqualified AI-driven interventions, close loopholes around unlicensed practitioners, and impose fines for violations, reflecting growing concerns about AI’s risks in sensitive healthcare contexts and prioritizing patient safety and professional oversight.
AI for Good: Leading with ethics, inclusion, and impact
Cisco
Brian Tippens
AI is rapidly becoming a force for social good, with applications ranging from fighting human trafficking to supporting farmers and improving access to clean water. Ensuring this technology benefits everyone requires a strong commitment to ethics, inclusion, and transparency, along with efforts to make AI skills and opportunities widely accessible. Building responsible AI is a shared responsibility, demanding proactive collaboration and a focus on human impact.
AI News from Other Fields
Vogue’s recent publication of a Guess advertisement featuring an AI-generated model has ignited debate over the erosion of human creativity and authenticity in fashion. The subtle disclosure of AI involvement left many feeling misled, intensifying concerns about transparency, ethics, and the potential loss of cultural depth.
Report: Disney’s Attempts to Experiment With Generative AI Have Already Hit Major Hurdles
Gizmodo
James Whitbrook
Disney’s recent attempts to use generative AI in film production have repeatedly stalled due to legal uncertainties, copyright disputes, and fears of public backlash. Efforts to digitally recreate actors and introduce AI characters were abandoned over concerns about data security, intellectual property, and union negotiations. High-profile controversies, lawsuits, and negative publicity have left studios cautious, suggesting that widespread adoption of generative AI in Hollywood is still a distant prospect.
CEO Brian Chesky says Airbnb is going to become an AI-first app with agents that can book trips for you
AOL / Insider
Kelsey Vlamis
Airbnb CEO Brian Chesky is steering the company toward becoming an AI-first platform, emphasizing automation and personalization in customer service. He predicts that leading apps will soon be fundamentally AI-driven, and Airbnb aims to be a comprehensive travel and services hub that can proactively handle bookings and user needs. Recent AI deployments have reduced human intervention and are set to expand, reflecting Chesky’s vision of Airbnb as the “everything app” for travel.
Health Care AI Solved Note-Taking. Fixing the Core Business Will Be Harder
Newsweek
Alexis Kayser
U.S. health systems are moving beyond basic AI tools like ambient scribes to more advanced platforms that unify data and enable predictive modeling, aiming to transform patient care and operational efficiency. However, widespread adoption faces hurdles such as fragmented infrastructure, lack of clinical guidelines, reimbursement challenges, and slow regulatory progress. Experts stress the need for coordinated platforms, rigorous validation, and government leadership to realize AI’s full potential in healthcare.
Rethinking Trust Formation in AI Diagnostics: Contrasting Human-like and Machine-like Perceptions in User Responses (Pre-Print)
Research Square
Xiaochen Liu, et al.
As AI-powered medical chatbots become more prevalent in healthcare, user trust emerges as a central challenge—shaped by whether patients view these systems as mere tools or as human-like agents. This study investigates how internal and external factors, privacy concerns, and user perceptions influence trust in AI medical consultations, integrating multiple theoretical models to clarify the complex dynamics of trust-building in AI-driven healthcare.
AI Consulting Services Market Size to Hit USD 49.11 Billion by 2032 Driven by Enterprise AI Adoption, Custom Strategy, and Regulatory Demand | SNS Insider
GlobeNewswire / SNS Insider
The AI consulting services market is experiencing rapid expansion, driven by businesses across sectors seeking expert help to implement AI for efficiency, compliance, and innovation. Large enterprises and regulated industries like finance are leading adopters, relying on consultants for integration, strategy, and governance. IT consulting dominates, while strategy consulting grows fastest as companies focus on aligning AI with business goals and responsible practices to stay competitive.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab are compensated or otherwise incentivized by any third parties for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright rights, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.


