This human-curated, AI-generated newsletter from the AAA-ICDR Institute and AAAiLab keeps you up to date on AI news from the past week that is relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
PLI and AAA Release Innovative AI Course for Law Firm Leaders
Practising Law Institute
Practising Law Institute and the American Arbitration Association have launched the course Building a Law Firm AI Strategy, which was developed in collaboration with Creative Lawyers. Aimed at law firm leaders, the six-module, self-paced training offers frameworks to implement generative AI, overcome organizational resistance, and balance innovation with long-term viability. Featuring the AAA’s AI journey as a case study, it equips firms to meet evolving client needs and lead strategic AI adoption.
Reed Smith International Arbitration Focus: AI
Reed Smith
Peter Rosher, et al.
Artificial intelligence is increasingly being integrated into international arbitration, with major law firms and institutions investing in AI to enhance dispute resolution. In this report, diverse experts contribute their perspectives on both the opportunities and challenges AI brings to the field, highlighting a period of rapid evolution and collaboration as the technology’s future role in arbitration continues to unfold.
Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies
LawSites
Bob Ambrogi
Harvey is partnering with LexisNexis to blend LexisNexis’ AI tools, legal databases, and citation validation into Harvey’s platform, allowing users to receive reliable, citation-backed legal answers and automate key legal workflows. By leveraging LexisNexis’ trusted legal content and AI expertise, the collaboration aims to streamline tasks such as drafting motions, and it signals Harvey’s continued rapid growth and ambition in the legal tech sector.
AI-Generated Deepfakes in Court: An Emerging Threat to Evidence Authenticity?
JD Supra / Womble Bond Dickinson
U.S. courts are grappling with how to address AI-generated deepfakes offered as evidence. While current rules require proof that evidence is authentic, a proposed amendment to the federal evidence rules would add steps for challenging suspected AI fabrications, requiring challengers to provide credible reasons for the challenge and proponents to meet a higher standard of proof. For now, the rules committee has decided not to adopt the amendment, citing existing safeguards and the small number of deepfake cases to date, but it has left the proposal open for future consideration.
English High Court highlights that lawyers using AI must ensure accuracy
Pinsent Masons
A recent English High Court ruling emphasized that lawyers remain fully responsible for verifying the accuracy of legal documents, especially when using AI tools like ChatGPT, which can produce convincing but unreliable results. The court warned that unverified AI-generated citations threaten the integrity of legal proceedings and public trust, urging law firms to establish clear AI usage policies and training, and signaling that future misuse could result in serious professional consequences.
Generative AI and LLM Developments
Some thoughts on Generative AI
Amazon.com
Andy Jassy
Amazon CEO Andy Jassy argues that embracing AI through continuous learning, experimentation, and collaboration is essential for adapting to rapid technological change, driving innovation, and maximizing impact within organizations. Those who actively engage with AI tools and strategies, he writes, will be best equipped to contribute meaningfully and to help transform both customer experiences and company operations.
Sam Altman Says GPT-5 Coming This Summer, Open to Ads on ChatGPT—With a Catch
ADWEEK
Trishla Ostwal
OpenAI plans to launch GPT-5 this summer, aiming to surpass its previous AI model and maintain its lead amid intensifying competition. CEO Sam Altman discussed potential new revenue strategies, including ads, but stressed that user trust and privacy must not be compromised. The company also faces legal pressure to retain user data due to ongoing litigation, highlighting the complex balance between innovation, monetization, and user privacy in the evolving AI landscape.
ChatGPT just got way better at search—and Google should be worried
Tom's Guide
Amanda Caswell
OpenAI’s revamped ChatGPT Search now handles complex, multi-part questions, interprets images, and delivers more personalized, context-rich answers, directly challenging Google’s dominance in search. Unlike Google’s fact-focused, ad-driven approach, ChatGPT offers interactive, conversational responses and can recall user preferences. These upgrades position ChatGPT as a versatile research assistant, especially for brainstorming, deep dives, and visual analysis, signaling a significant shift in how people may seek information online.
Rates Of Hallucination In AI Models From Google, OpenAI On The Rise
aol.com
Madison Troyer
Recent advancements in generative AI models from Google and OpenAI have led to an increase in “hallucinations,” in which models confidently present false information. This trend not only undermines trust in AI-generated summaries but also reduces traffic to more reliable sources, with click-through rates to accurate articles dropping significantly. Experts are concerned that these persistent errors threaten the usefulness and credibility of AI tools in information retrieval.
This AI Model Never Stops Learning
WIRED
Will Knight
MIT researchers have developed SEAL, a method enabling large language models to continually refine themselves by generating and learning from their own synthetic data. This approach allows AI to adapt to new information and user preferences over time, moving closer to human-like, lifelong learning. While SEAL faces challenges like “catastrophic forgetting” and high computational demands, it represents a promising advance toward more adaptive and personalized AI systems.
MIT brain scans suggest that using GenAI tools reduces cognitive activity
TechSpot
Daniel Sims
Research from MIT suggests that frequent use of generative AI tools for writing is associated with weaker neural connectivity, poorer recall, and less original work than writing unaided or with a search engine. While AI-assisted essays score well, users engage less deeply with the material and struggle to recall what they wrote. These findings raise concerns about cognitive development, especially as AI becomes embedded in educational practice for both students and educators.
Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability
arXiv.org
Matteo Cargnelutti, et al.
The Institutional Books 1.0 dataset, produced by the Harvard Library and partners, comprises 242 billion tokens from 983,004 public domain books originally digitized via the Google Books project. This refined dataset—drawn from over 1 million volumes in 250+ languages—features OCR-extracted text, post-processed versions, and comprehensive metadata. It emphasizes provenance, quality, and usability for machine learning, with extensive analysis covering topics, language, deduplication, and OCR artifacts. Released under a noncommercial license, the project promotes sustainable, transparent data stewardship and seeks to foster an institutional commons for training large language models across academia and industry.
Perplexity Rejects BBC’s Claims Over AI News Content Reuse
PYMNTS.com
The BBC has formally accused AI search engine Perplexity of using its news content without permission to train AI, demanding an immediate halt to content scraping, deletion of the material used, and financial compensation. The move marks the BBC’s first legal action in such a dispute and intensifies the broader clash between publishers and AI firms over copyright, accuracy, and trust. Perplexity, meanwhile, rejects the claims and faces similar challenges from other media organizations.
AI Regulation and Policymaking
Expert Survey: AI Reliability & Security Research Priorities
Institute for AI Policy and Strategy
Joe O’Brien
A survey of AI reliability and security experts identifies robust risk monitoring, dangerous-capability evaluations, and oversight of multi-agent systems as top research priorities for safe AI development. While some areas, such as access control and supply chain integrity, are seen as crucial but difficult to advance, many promising directions are rated as both important and tractable. The findings highlight where immediate funding can have the greatest impact and offer guidance for strategic investment in AI safety.
LLM and Generative AI for Sensitive Data - Navigating Security, Responsibility, and Pitfalls in Highly Regulated Industries
InfoQ
Stefania Chaplin, Azhir Mahmood
Generative AI is rapidly transforming industries from science to law, but its adoption in highly regulated sectors demands careful handling of sensitive data, adherence to evolving global regulations, and robust frameworks for responsibility, security, and explainability. Effective AI implementation requires cross-functional MLOps practices, continuous testing, and transparency. Balancing model performance with interpretability is crucial, especially where compliance and trust are paramount. Ongoing vigilance against vulnerabilities and ethical risks remains essential as AI becomes more pervasive.
Lucinity and PwC Collaborate to Simplify AI Integration for Compliance Teams
GlobeNewswire / Lucinity / PwC Denmark
Lucinity and PwC Denmark are partnering to make AI adoption easier for financial institutions tackling financial crime. Their joint approach automates compliance tasks, improves investigation speed, and ensures regulatory transparency, while allowing banks to tailor solutions to their needs. PwC supports smooth integration and staff training, and Lucinity’s platform enables custom AI-powered workflows, helping institutions keep up with growing regulatory demands and operational complexity.
‘A catastrophic risk to humanity’: New York is pushing back against AI
Dazed
Thom Waite
New York’s RAISE Act seeks to impose new obligations on leading AI companies, requiring them to develop and publicly share safety plans and promptly report major security incidents. The law targets firms using significant computing resources to train advanced AI, aiming to prevent catastrophic harms such as large-scale financial losses, mass casualties, or existential threats. The initiative reflects mounting concerns over the risks posed by rapidly advancing AI technologies.
Newsom’s new AI report could shape legislation
Palo Alto Online / CalMatters
Khari Johnson
California is weighing new AI regulations that prioritize transparency, such as whistleblower protections and third-party audits, in response to rapid advances and emerging risks from powerful AI models. Recent findings highlight AI’s growing autonomy and potential for misuse, prompting lawmakers to consider bills on content labeling, chatbot protocols, and legal accountability. The state aims to balance innovation with public safety while coordinating with other governments to minimize regulatory burdens.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab is compensated or otherwise incentivized by any third party for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.