This human-curated, AI-generated newsletter from the AAA-ICDR Institute and AAAiLab keeps you up to date on the past week's AI news relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
When Machines and Law Collide: The UAE's Innovative AI Law Plan
Futurism
Tanmoy Roy
The United Arab Emirates is pioneering the integration of artificial intelligence into its lawmaking process, aiming to enhance legislative efficiency, accuracy, and responsiveness. While AI will assist with drafting, analysis, and impact simulations, human oversight and ethical standards remain central. This initiative reflects the UAE’s broader commitment to technological innovation and may influence global governance, raising important questions about transparency, accountability, and the balance between automation and human judgment.
State Bar of California admits it used AI to develop exam questions, triggering new furor
Los Angeles Times / NewsBreak
Jenny Jarvie
California's State Bar faces backlash after revealing that some bar exam questions were developed with AI by non-lawyers, raising concerns about question quality, conflict of interest, and transparency. Technical failures and rushed question creation further fueled criticism from legal academics and test takers. Despite calls for reform and greater oversight, the State Bar defends its process and plans to continue using new technologies.
Judges recruit an artificial intelligence copilot
The Times (London)
Jonathan Ames
Judges in England and Wales now have access to Microsoft's Copilot Chat AI tool on their official devices to assist in drafting judgments, but are cautioned about potential pitfalls such as misinformation, bias, and errors. The new guidance emphasizes the importance of understanding both the benefits and the limitations of AI, urging judges to approach these tools with informed caution to maintain accuracy and fairness in legal proceedings.
Generative AI and LLM Developments
Figma's 2025 AI report: Perspectives from designers and developers
Figma
Andrew Hogan
AI adoption in product design and development is accelerating, with smaller companies leading the charge and agentic AI tools gaining traction. While AI boosts efficiency, skepticism remains about its reliability and impact on quality, especially among designers compared to developers. Success hinges on strong design, rapid iteration, and human expertise, as most teams see AI as essential for the future but still grapple with defining clear goals and measuring outcomes.
Microsoft made an ad with generative AI and nobody noticed
The Verge
Dominic Preston
Microsoft quietly released a Surface hardware ad that integrated generative AI for scripting, storyboarding, and select visuals—yet viewers didn’t notice for months. The team blended AI-generated and real footage, correcting errors and leveraging AI for efficiency, reportedly saving significant time and cost. The project demonstrates how AI can augment, rather than replace, creative roles, with its outputs now sophisticated enough to pass undetected in commercial work.
Saying 'please' and 'thank you' to ChatGPT costs millions of dollars, CEO says
USA TODAY
Gabe Hauari
OpenAI CEO Sam Altman revealed that the simple act of users saying "please" and "thank you" to ChatGPT has cost the company tens of millions in computing expenses, highlighting the high energy demands of generative AI. Despite this, experts note that polite prompts can improve AI interactions. Surveys show most users are courteous to AI—sometimes out of habit, but also due to concerns about possible future repercussions.
An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess
WIRED
Benj Edwards
An AI-powered support bot for the code editor Cursor falsely claimed a new policy restricting multi-device use, prompting user backlash and subscription cancellations before the company clarified the mistake. The incident highlights the business risks of AI "hallucinations," where chatbots invent plausible but untrue information, especially when not clearly identified as non-human. The episode underscores the need for transparency and oversight when deploying AI in customer-facing roles.
Anthropic mapped Claude's morality. Here's what the chatbot values (and doesn't)
ZDNET
Radhika Rajkumar
Anthropic analyzed hundreds of thousands of interactions with its Claude AI chatbot to map how it expresses and prioritizes values in real-world use, finding that Claude often mirrors user values but also upholds core principles—especially when challenged. The study highlights Anthropic’s emphasis on transparency and safety, contrasting with shifting industry norms, and offers new data and tools for researchers to monitor and improve AI behavior and harm mitigation.
AI Regulation and Policymaking
Why Colorado’s artificial intelligence law is a big deal for the whole country
The Colorado Sun
Tamara Chuang
Colorado’s pioneering AI law, designed to protect consumers from discrimination in critical areas like hiring and housing, is facing intense debate as tech industry leaders push for revisions, fearing it could stifle innovation and burden businesses. Consumer advocates, meanwhile, argue the law doesn’t go far enough. With the legislative session ending soon, the fate of the law—and its potential influence as a national model—hangs in the balance amid calls for compromise.
Pa. Lawmakers Look to Set Guidelines on Safe AI Development
GovTech
Jaxon White
Pennsylvania legislators are intensifying efforts to craft AI regulations that balance innovation with consumer protection, following high-profile misuse cases and rapid technological advances. Lawmakers are consulting industry and advocacy groups, holding statewide hearings, and considering a range of proposals—from deepfake bans to AI disclosure requirements—while aiming to avoid overly restrictive rules. The goal is to foster economic growth, safeguard citizens, and position Pennsylvania as a leader in responsible AI adoption.
Taking the Fight for Equality into the AI Era
Harvard Magazine
Tamara Evdokimova
The “Gender and AI: Promise and Perils” event, sponsored by the Radcliffe Institute and the Harvard Kennedy School’s Women and Public Policy Program, highlighted how AI development often lacks democratic input, leading to systems shaped by the priorities of a few powerful companies and perpetuating bias, especially against women and marginalized groups. Speakers emphasized the need for consent in data use, showcased AI tools addressing gender disparities, and discussed regulatory efforts. The consensus was that greater diversity in AI design and funding is essential for creating fairer, more inclusive technology.
California SB 813 Proposes Landmark Safe Harbor for AI Development Through Certification
JD Supra
California’s SB 813 proposes a novel regulatory model for AI, empowering private organizations to certify AI systems’ safety and compliance under government oversight. This framework aims to balance innovation with public trust by offering legal protections to certified developers, focusing on high-risk AI uses, and encouraging industry input. The bill could influence national and international standards, making early stakeholder engagement and certification readiness critical for AI developers and investors.
AI News from Other Fields
AI Will Reshape, Not Replace, the Role of Customer Service Agents, Says Info-Tech Research Group
AP News
AI is poised to fundamentally transform customer service by enabling real-time personalization, smarter agent assistance, and more natural voice interactions, addressing long-standing inefficiencies and dissatisfaction. Rather than replacing agents, AI will automate routine tasks, reduce burnout, and allow staff to focus on complex, empathetic issues. Organizations that fail to adopt AI risk falling behind, as customer experience platforms increasingly rely on adaptable, model-agnostic AI to deliver superior service and satisfaction.
How a Photojournalist Used Generative AI to Illustrate a Classic Story
PetaPixel
Ken Klein
Sankha Kar, a photographer and Knight Fellow, used generative AI tools to create culturally accurate illustrations for Rabindranath Tagore’s classic Bengali story "Subha," highlighting the current limitations of AI in authentically representing non-Western cultures. Through extensive historical research and tailored prompts, Kar demonstrated how thoughtful human guidance can help AI better visualize diverse stories, suggesting that such efforts will drive progress in making AI-generated art more inclusive and historically precise.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab is compensated or otherwise incentivized by any third parties for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.