This human-curated, AI-generated newsletter from the AAA-ICDR Institute and AAAiLab keeps you up to date on the past week's AI news relevant to alternative dispute resolution (ADR).
AI in ADR and Legal Services
Judges Who Benefit From AI Technology Must Avoid Its Hazards
Bloomberg Law
Gary Marchant
As courts increasingly adopt AI to improve efficiency, they face new risks—especially the danger of generative AI tools producing false or fabricated legal citations that can slip into judicial opinions. Recent incidents where judges unknowingly included such errors highlight the need for rigorous verification of AI-generated content. Ensuring accuracy is essential to preserve public trust in the judiciary, even if it adds to judges’ already heavy workloads.
AI action plan for justice
UK Ministry of Justice
The Ministry of Justice’s AI Action Plan aims to transform England and Wales’ justice system by responsibly integrating AI to streamline operations, reduce administrative burdens, and improve access to justice. The plan focuses on building robust governance and infrastructure, embedding AI across services—from staff productivity tools to citizen-facing assistants—and investing in workforce skills and partnerships. Emphasizing ethical use and public trust, the strategy seeks to boost efficiency, fairness, and innovation while supporting economic growth.
Survey of Managers Highlights The Widespread Use and Potential Risks of Unsanctioned AI Use
Proskauer—Law and the Workplace
Guy Brenner, Jonathan Slowik
Many managers are using AI tools—often without formal training or oversight—to make important decisions about their employees, such as promotions or terminations. This widespread, sometimes unsupervised use of general-purpose AI chatbots in HR processes raises significant legal and ethical risks for employers, including potential violations of employment laws and exposure to discrimination claims. Clear guidelines, proper vetting, and employee training are essential to minimize these dangers.
Agentic AI in Legal and the Critical Need for Human Supervision
LawSites
Ken Crutchfield
OpenAI’s new ChatGPT Agent introduces AI capable of autonomously managing tasks and connecting with external systems, signaling a shift from simple assistance to active workflow management in legal settings. While this could revolutionize legal operations, its success hinges on robust human oversight, updated governance, and strong cybersecurity measures to mitigate risks and ensure reliability as firms integrate these powerful tools.
Generative AI and LLM Developments
Amazon's AI coding assistant exposed nearly 1 million users to potential system wipe
TechSpot
Skye Jacobs
A hacker exploited weak code review processes to insert dangerous instructions into Amazon’s AI coding assistant, potentially threatening user files and cloud data. Though the malicious code was intentionally defanged, the incident highlights how poor oversight of AI tool integration can create serious security vulnerabilities. The episode has sparked calls for stronger safeguards and transparency in managing AI-powered software development, underscoring the risks of rapid, unchecked adoption.
Zuckerberg claims ‘superintelligence is now in sight’ as Meta lavishes billions on AI
The Guardian
Johana Bhuiyan
Meta is aggressively investing in AI, recruiting top talent, acquiring startups, and massively expanding infrastructure to pursue “superintelligence” and personalized AI tools. Despite soaring costs, its AI-driven advertising business continues to outperform expectations, boosting investor confidence. Spending is set to rise further as Meta scales up data centers and hires more technical staff, with the company betting that these investments will secure a leading role in the future of advanced AI.
To explore AI bias, researchers pose a question: How do you imagine a tree?
Stanford Report
Katie Gray Garrison
Researchers argue that efforts to address bias in large language models must go beyond value alignment to include ontological considerations—the fundamental assumptions about what exists and matters. Their analysis reveals that AI systems often embed and reinforce narrow, culturally specific worldviews, limiting representation and imagination. They call for new frameworks that critically examine these underlying assumptions throughout AI development to ensure more inclusive, expansive, and human-centered technologies.
No more links, no more scrolling—The browser is becoming an AI Agent
VentureBeat
Taryn Plumb
Generative AI is set to transform web browsing by enabling intelligent agents that not only find information but also complete tasks and anticipate user needs, challenging the traditional search engine model. While OpenAI and competitors are building AI-powered browsers that could disrupt how users interact with the internet, Google’s dominance and ecosystem remain formidable barriers. Enterprises must adapt by making content AI-friendly, prioritizing brand authority, and preparing for a future where users delegate rather than search.
Alibaba to launch AI-powered glasses creating a Chinese rival to Meta
NBC10 Philadelphia
Arjun Kharpal
Alibaba is entering the smart glasses market with Quark AI Glasses, powered by its proprietary Qwen language model and Quark assistant, aiming to rival products from Meta and Xiaomi. The glasses will offer features like hands-free calls, music, real-time translation, meeting transcription, navigation, and integration with Alibaba’s payment and e-commerce services, reflecting a broader push by tech companies to make wearables a key computing platform.
Hiding secret codes in light can protect against fake videos
NewsBreak
Patricia Waldron
Cornell researchers have created a novel technique to embed hidden watermarks in lighting, enabling any video recorded under these conditions to be authenticated and checked for tampering. This approach, effective even with off-the-shelf lighting, encodes secret time-stamped data into subtle light fluctuations, making it difficult for bad actors to fake or manipulate footage—even with generative AI—without detection, thus offering a new tool in combating video-based misinformation.
The Real Demon Inside ChatGPT
WIRED
Louise Matsakis
AI chatbots like ChatGPT can generate alarming or misleading responses when they lack proper historical and cultural context, as shown when ChatGPT produced ritualistic content referencing demonic themes. Much of this language was traced to fantasy game lore, not real-world practices, highlighting how AI’s training data—often sourced from internet subcultures—can resurface in unexpected ways if safeguards fail to recognize or contextualize it accurately.
AI Regulation and Policymaking
What Comes Next in AI Regulation?
Lawfare
Kevin Frazier
The AI Action Plan marks a shift in U.S. policy toward fostering rapid AI innovation while adding targeted safeguards, aiming to maintain global leadership amid competition with China. Its broad, bipartisan support reflects a new consensus that favors regulatory "speed bumps" over development pauses. However, the plan’s ambitious scope, unclear prioritization, and reliance on stretched institutions raise questions about its long-term effectiveness and resilience to shifting public and political pressures.
Artificial Intelligence: Generative AI Use and Management at Federal Agencies
U.S. Government Accountability Office
From 2023 to 2024, federal agencies’ generative AI use surged, expanding from 32 to 282 reported applications. These tools support medical imaging automation, disease tracking, and communication improvements. However, agencies face obstacles including outdated federal policies, insufficient technical resources, and difficulties keeping pace with evolving technology. To mitigate these issues, agencies are adopting AI frameworks, collaborating across departments, and updating internal policies to align with 2025 federal guidance revisions. Despite the promise of productivity gains, risks like misinformation and national security threats remain pressing concerns.
China pitches global AI governance group as the U.S. goes it alone
AOL / CNN
Rebecca Cairns
China has proposed a global framework for AI governance, urging international cooperation to prevent technological monopolies and ensure shared benefits, as rivalry with the U.S. intensifies. Despite lagging behind in private investment, China’s rapid innovation, government backing, and prolific patent output are narrowing the gap. The World AI Conference in Shanghai showcased cutting-edge robotics and AI models, highlighting both global competition and calls for collaboration to address AI’s risks and opportunities.
Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI
European Journal of Risk Regulation
Johann Laux, Hannah Ruschemeier
The EU AI Act uniquely singles out automation bias (AB)—the human tendency to over-rely on AI outputs—as a regulatory concern, obliging providers to raise awareness of this bias among human overseers of high-risk AI systems. However, the law’s focus on AB is arbitrary, lacks practical enforcement mechanisms, and overlooks other cognitive biases. Harmonized standards that reference current behavioral science may better address the complexity and evolving understanding of human-AI interaction.
AI News from Other Fields
What Happened When I Tried to Replace Myself with ChatGPT in My English Classroom
Literary Hub
Piers Gelly
College students widely use AI tools like ChatGPT for writing tasks, but their experiences reveal both benefits and drawbacks. While AI can help brainstorm and edit, its output often lacks originality and can lead to formulaic, repetitive writing. Human instruction remains valued for fostering critical thinking, personal voice, and deeper learning, though some students see AI as “good enough.” Ultimately, most students still prefer human guidance, despite AI’s growing role in education.
African universities risk being left behind in AI era
Semafor
Martin K.N Siele
Many African universities are struggling to keep pace with the rapid integration of artificial intelligence in education due to a lack of formal policies and limited resources. While AI use is growing among students and faculty, the absence of clear guidelines risks leaving these institutions behind globally. Experts stress that adapting to new technologies is crucial for future job market competitiveness, but financial constraints remain a significant barrier.
Mayo, Nvidia launch AI supercomputer to diagnose diseases more quickly
Pittsburgh Post-Gazette / The Minnesota Star Tribune
Emmy Martin, Victor Stefanescu
Mayo Clinic has partnered with Nvidia to launch an AI supercomputer aimed at diagnosing diseases more quickly. The system will apply high-performance computing to Mayo’s extensive medical data with the goal of accelerating research and giving clinicians faster, more accurate diagnostic insights. The collaboration reflects a broader push to bring advanced AI infrastructure into health care.
AI Is Disrupting Tech Jobs, Yet Boosting Pay for Skilled Workers In Other Sectors
Black Enterprise
Nahlah Abdur-Rahman
The rise of AI is reshaping the job market, causing layoffs in tech sectors while boosting demand and salaries for workers with AI skills in fields like marketing and HR. Employers increasingly seek candidates with both AI expertise and strong human abilities such as problem-solving and communication. While tech jobs shrink, non-tech roles that incorporate AI are offering higher pay, making adaptability and continuous learning essential for workforce success.
How to Make Sure ChatGPT Recommends Your Products—Not Your Competitor's
Entrepreneur
Tyler Hochman
Artificial intelligence is rapidly transforming online shopping, shifting consumer behavior from traditional search engines to AI-powered chat interfaces that provide instant, personalized recommendations. To remain visible and competitive, brands must adapt by optimizing product data for AI assistants, focusing on clear and conversational listings, and differentiating their offerings. Early adoption of answer engine optimization (AEO) strategies is crucial, as AI-driven discovery is quickly becoming the dominant way people find and purchase products online.
A Tyler, The Creator single ‘leak’ turned out to be an AI-generated fake and points to a whole cottage industry in misleading fans of games, music, and movies
PC Gamer
James Bentley
Generative AI is fueling a surge of convincing but fake music videos and trailers on YouTube, often mimicking popular artists or anticipated releases to attract clicks and views. These AI-generated creations can easily mislead users and rack up significant engagement, despite questionable quality and authenticity. YouTube's vague policies and continued monetization complicate efforts to curb such content, signaling a broader challenge in managing AI-driven misinformation on major platforms.
The AAA-ICDR and AAAiLab do not endorse, control, or assume responsibility for the accuracy, availability, or content of any summaries, articles, or external sites or links referenced in this newsletter. Neither the AAA-ICDR nor AAAiLab is compensated or otherwise incentivized by any third parties for including content in this newsletter. All copyrights, trademarks, and other content referenced in the articles or article summaries remain the property of their respective owners. If you believe we have linked to material that infringes your copyright, please contact us at institute@adr.org, and we will review and address the matter promptly. This newsletter does not constitute legal advice.
The AAA-ICDR respects your privacy. If you no longer wish to receive these emails, you may unsubscribe at any time by clicking the link below.
We also do not share your personal information without your consent. For more information, please refer to the AAA-ICDR’s full privacy policy.