Last week, DoNotPay CEO Joshua Browder announced that the company’s AI chatbot would represent a defendant in a U.S. court, in what would have been the first use of artificial intelligence for this purpose. However, the experiment has now been cancelled, with Browder stating that he received objections from multiple state bar associations.
The plan was for DoNotPay’s AI chatbot to assist in a speeding case scheduled to be heard on February 22nd: running on a smartphone, the chatbot would feed instructions to the defendant via an earpiece during the trial. However, prosecutors from multiple state bar associations did not respond well to the proposed experiment, writing to Browder to warn him that it could break the law. Specifically, Browder could be prosecuted for the unauthorised practice of law, an offence that carries up to six months in jail in some states.
In light of this, Browder decided to cancel the experiment rather than risk jail time. He stated, “Even if it wouldn’t happen, the threat of criminal charges was enough to give it up.”
This brush with the wrong side of the law has also prompted DoNotPay to reassess its products. The company previously offered computer-generated legal documents for a wide range of issues, covering everything from child support payments to annulling a marriage. Browder has now announced that DoNotPay will focus solely on consumer rights cases going forward, removing all other services “effective immediately.”
The CEO also said that employees are working 18-hour days to improve DoNotPay’s user experience, which doesn’t seem like something to boast about.
The incident raises broader questions about the use of AI in the legal system. AI may look like an exciting technology with many useful applications, but any system carrying legal weight needs scrutiny for flaws and biases, and using AI to represent defendants in court raises ethical questions of its own.
States such as New York and California have previously used the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) risk-assessment tool to estimate how likely a defendant is to reoffend, factoring its scores into bail decisions. However, a 2016 ProPublica investigation found that COMPAS was more likely to falsely flag Black defendants as high risk, while also falsely rating white defendants as low risk.
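To make that finding concrete, the disparity ProPublica measured is a difference in false positive rates between demographic groups. Below is a minimal sketch in Python of that kind of check, using entirely hypothetical records; COMPAS itself is proprietary, and these inputs and scores are invented for illustration only.

```python
# Hypothetical fairness check of the kind ProPublica ran on COMPAS.
# The records below are made up; COMPAS is proprietary and its real
# inputs and scores are not public in this form.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False),  # flagged high risk, did not reoffend
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", False, True),
]

def false_positive_rates(records):
    """False positive rate per group: among people who did NOT
    reoffend, the fraction the tool flagged as high risk."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted_high, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
# -> {'A': 0.666..., 'B': 0.333...}
# Group A's non-reoffenders are flagged high risk twice as often as
# group B's: the kind of unequal error rate ProPublica reported.
```

In this toy example the tool wrongly flags two-thirds of group A’s non-reoffenders but only one-third of group B’s, which is the shape of the disparity at issue, even though the real analysis involved thousands of records and a ten-point risk score rather than a binary flag.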
It is clear that while AI may have a role to play in the legal system, it is worth proceeding with caution and weighing the consequences before deploying these technologies. For important legal matters, the knowledge and expertise of human professionals remain the safer bet.