AI Lawyer Fiasco: Man’s Courtroom Avatar Sparks Judge’s Fury!

Recently, Jerome Dewald, a 74-year-old man, made headlines by using an AI-generated avatar during a court appearance before the New York State Supreme Court, Appellate Division, First Judicial Department. The appearance, part of an employment dispute with MassMutual Metro New York, has ignited discussion about the role of artificial intelligence in legal proceedings, particularly for pro se litigants. The incident highlights both the potential and the pitfalls of integrating AI into the justice system.

Dewald, who operates Pro Se Pro, a service aimed at assisting unrepresented litigants, chose to use an AI avatar because of personal health challenges: a bout of throat cancer 25 years earlier left him unable to speak for extended periods. The AI service he used was Tavus, which generates realistic video avatars from provided footage. The process, as described by Dewald, requires:

  1. A 2-4 minute video of the subject talking.
  2. A 1-minute segment of the subject standing still.
  3. A generation time of 2-4 hours.

However, because his basic plan allowed only three replicas per month, Dewald could not generate a personalized avatar and instead used a default Tavus avatar named "Jim." This avatar, a digital image of a younger man in a blue collared shirt and beige sweater against a blurred virtual background, was presented during the hearing, causing confusion and drawing the court's disapproval.

Associate Justice Sallie Manzanet-Daniels was notably displeased, interrupting the proceedings with statements such as, "It would have been nice to know that when you made your application," and "I don’t appreciate being misled." She ordered the video turned off, reflecting the court's unpreparedness for an artificially generated image. This reaction is consistent with broader judicial skepticism toward AI.

Dewald insisted he had sought and received advance permission to use the AI avatar, a claim the court disputed. In subsequent statements, he apologized for not being fully transparent and emphasized that he had no intent to deceive, saying, "I had no idea that this would cause such a reaction." He also pointed to his engineering and computer science background, noting that he is not a lawyer but has been involved in legal education, having recently sat for a law school admission test and holding memberships in some bar associations.

Dewald's case is part of a broader trend: artificial intelligence is appearing in legal proceedings with increasing frequency, often controversially. For context, recent cases include:

| Date | Location | Details | Outcome |
|------|----------|---------|---------|
| 2023 | Manhattan, NY | Two lawyers fined $5,000 each for citing fictitious cases generated by ChatGPT. | Fines imposed; judicial rebuke. |
| 2024 | Vancouver, BC | Lawyer Chong Ke investigated for using ChatGPT to cite non-existent cases in a custody dispute. | Investigation ongoing; court documents cited discrepancies. |
