How artificial intelligence is changing American justice

Numerous law firms in the United States are experimenting with advanced technological tools to speed up their work, and the first incidents have already arrived. In a world like that of AI, it is easy to become a dinosaur.

While we in Europe are still debating them, in the USA, as often happens, the new Artificial Intelligence (AI) technologies have already arrived on the market, in different sectors and in a pervasive way. Among the most rapidly adopted applications are those in a sector usually thought to be conservative and little inclined to novelty: AI technologies have opened a new frontier, and in the USA lawyers and law firms are experimenting with the most recent tools. For example, legal AI startup Harvey recently raised $21 million in funding from investors, following an initial $5 million in funding a year ago. Sequoia Capital, which led the round, said more than 15,000 law firms are on waiting lists to start using Harvey; this, of course, has attracted investors of all sorts, such as the OpenAI Startup Fund, Conviction, SV Angel and Elad Gil.

The funded company's purpose is to create large language models for law firms. These models are trained on large, customizable datasets to produce text for draft contracts, legal reviews, and other types of documents, including draft arguments for use in court. Such tools have already shown that they can pass the written test of the US bar exam. Global law firm Allen & Overy said in February that 3,500 of its legal staff would use Harvey to automate the drafting of certain documents and the search for documents and texts useful for legal analysis. Similarly, in March, accounting giant PricewaterhouseCoopers said it would give 4,000 legal professionals access to the platform.

Harvey is just one of the more advanced AI tools available to law firms: Casetext, another company, released its CoCounsel product in March, which uses GPT-4 for the same purposes, i.e. to speed up tasks such as legal research, contract analysis and drafting, and document review. According to the company, CoCounsel can examine legal texts for internal consistency and completeness, and can also answer complex legal questions in natural language, providing a sort of "automatic opinion" complete with appropriate references on topics such as lack of jurisdiction. Thousands of lawyers at major American law firms already use CoCounsel; their level of satisfaction is not yet known but, judging by the tool's rapid diffusion, it could be quite high. Alongside products built by dedicated companies, numerous law firms are also trying to develop in-house solutions. For example, Holland & Knight is creating an AI tool it hopes will help lawyers review and amend credit agreements, partner Josias Dewey said. Faced with this effervescence in the development and adoption of new generative AI tools, it is legitimate to ask a few questions about the possible consequences and the effects on a legal system such as the American one.

The first incident occurred less than a month ago: an experienced American lawyer, Steven Schwartz, admitted using ChatGPT in a personal injury case against Avianca Airlines. The filings he produced with the generative AI cited six non-existent judicial decisions: as is well known, general-purpose language models such as ChatGPT (though not only ChatGPT) are subject to "hallucinations", i.e. they construct texts that are semantically and grammatically correct from a formal point of view but completely devoid of factual grounding because, especially in the case of first-generation tools, they are optimized to imitate human language, not to verify the content of what they produce. Schwartz now risks sanctions; meanwhile, a federal judge in Texas has ordered lawyers appearing in cases before him to certify that any document produced with AI has undergone human review to verify its accuracy. "These platforms in their current state are prone to hallucinations and bias. They make up facts and quotes," the judge wrote, also noting that while lawyers take an oath to uphold the law and represent their clients, AI platforms are obviously not bound to do so.

A second American judge, this time on the US Court of International Trade, has in turn issued an order that takes up the same point about the declaration required of lawyers, but also underlines that the AI tools currently in use do not guarantee that lawyers will avoid disclosing confidential information when preparing legal documents. This can lead to further harm, because the tools communicate with central servers whose confidentiality cannot be verified, and because they may reuse the information entered during use as a source for answers given to third-party users.

As always, in the US people learn by doing rather than using prudence and limited experimentation before launching potentially disruptive instruments on the market; this is why the draft regulation on AI prepared by the European Parliament, the AI Act (AIA), represents an attempt at preventive regulation which, if it is not excessively restrictive or complex to implement, puts our continent at the forefront in preventing certain possible harms. This attempt, both for its intrinsic interest and to keep it from becoming a brake, must be studied and examined as soon as possible by those who have the expertise to do so, because in a world like that of AI technologies, it is easy to become a dinosaur.


