In a civil proceeding before the Business Section of the Court of Florence, a defense lawyer included in his brief a series of references to Court of Cassation rulings that did not exist. The judges determined that the citations had been obtained through ChatGPT, which the lawyer had used to speed up legal research. According to the findings, the generative AI reportedly produced plausible but entirely fabricated references, which the lawyer then incorporated into the document without verifying their authenticity.

The opposing party condemned the professional's alleged "bad faith," accusing him of reckless litigation, that is, of deliberately introducing false information to gain a procedural advantage. The lawyer defended himself by attributing the error to one of his partners, but the case has brought to light an increasingly pressing issue: the risks of using generative artificial intelligence in highly specialized professions where the quality of information is critical.

The judges exposed the "phantom rulings" with a simple check, a reminder that AI, like humans, makes mistakes, and can even fabricate things outright.