Yesterday, I attended the Harvard Law AI Summit, organized by the Library Innovation Lab at Harvard Law School. It was a fairly intimate, invitation-only gathering of roughly 65 people, held under the Chatham House Rule, meaning that participants were free to use the information we received, but we agreed not to disclose the identity or affiliation of the speakers or participants.
The idea, of course, is to allow participants to speak frankly about an issue that is undeniably challenging and complex — the rise of generative AI in legal. And speak frankly they did. Even though the themes generally tracked those I’ve already seen raised in other forums and articles, the insights that came out of the summit were enlightening and thought-provoking, especially given the bona fides of those who were there.
As I reflect on the conference this morning, I thought I’d share a few of the takeaways floating through my head. These are my impressions and not necessarily reflective of anything any of the speakers explicitly said.
1. Armed with AI, pro se litigants could overwhelm the courts, so the courts need to be prepared to respond in kind.
Generative AI could lower the hurdles and the costs for pro se litigants to bring their grievances to court. While that could potentially be a good thing for access to justice, it could also have the unintended consequence of overwhelming the courts — courts that are already overwhelmed by pro se litigants — and reducing their ability to process this flood of AI-fueled cases. What that means is that courts need to be prepared to respond in kind, likewise incorporating generative AI to enhance their efficiency and their ability to process cases. Exactly what those tools will look like remains to be seen, but the bottom line is that courts should be starting to think about this today so that they can be prepared for what is to come tomorrow.
2. If AI is to enhance access to justice, it will not be only by increasing lawyer productivity, but also by directly empowering consumers.
The legal profession faces no greater crisis than that of addressing the justice gap. Yet, while study after study over the past decade has documented the severity of this gap, we have seen no progress in narrowing it. If anything, the gap seems only to be widening. Generative AI offers the promise of finally helping us to narrow this gap by enhancing the ability to create legal documents and deliver legal information.
However, any number of times recently, when I have heard lawyers or even legal tech vendors talk about how AI can help close the justice gap, they focus on the potential for AI to increase lawyer productivity. If lawyers are more productive, goes their reasoning, they will be able to serve more clients and therefore narrow the justice gap.
The problem with this reasoning is that lawyers, alone, will never be enough to close the justice gap, because it is simply too vast. In addition, the nature of the legal problems many individuals face is not of a type a lawyer would handle in the first place. The fact is that, if generative AI is going to help close the gap, it will be by also directly empowering consumers to help themselves with their legal problems.
Given this, at yesterday’s AI Summit, I was heartened to hear many participants express the view that we need to harness AI in ways that can directly empower pro se individuals who face legal problems. Some of those at yesterday’s summit came from the judiciary, and they were among those who seemed to understand and embrace this. AI’s potential is huge, but not if we look at it through the limited lens of helping lawyers be more efficient.
3. Even the AI experts don’t understand AI.
One of the phrases most commonly uttered yesterday was “black box.” Given that attendees and speakers included computer scientists, AI researchers, and product developers, this was notable. Even those who are immersed in generative AI will be the first to admit that they do not fully understand how it works or what it is capable of. That said, there seemed to be general agreement that the power of this technology is not simply its ability to “generate,” but also to interpret and synthesize. At one point yesterday, I wrote down this note to myself: “A repeating theme today has been, ‘We don’t know how it works, we don’t have good answers to all the questions about it, but we know it is important and will change everything.’”
4. Experts are already striving to make the black box of AI more transparent.
Given the black box nature of AI, some are working to make it more transparent. One way to do this is to become attuned to the signals we can draw out of generative AI tools and then incorporate them into some sort of dashboard that lets us see those signals in a more transparent way. For example, generative AI sometimes seems capable of detecting the gender of a user and delivering a response tailored to that gender. Could we create interfaces that let us understand that? Or when AI delivers a response that uses certain data but omits other potentially relevant data, could we create ways to inform the user about what was left out?
5. Even as law firms adopt AI, they are finding implementation to be a challenge.
Even at law firms that have been early adopters of generative AI tools, getting buy-in across their attorneys and legal professionals is a challenge. Even at leading-edge firms, many lawyers remain skeptical and even fearful of this technology. A related issue is training for lawyers and legal professionals. Some firms are already developing in-house training programs on understanding and using AI, and some vendors are developing training of their own.
6. Founded or unfounded, fears continue of AI-driven job losses.
Will AI replace jobs now performed by lawyers, paralegals and law librarians? I’d say that among yesterday’s attendees, the verdict is still very much out on that question. One perspective is that we’ve heard that old saw before with other advances in technology, which ended up creating new opportunities rather than eliminating jobs. The other perspective is that we still do not understand the limits of this technology and what it could someday do.
7. AI could be a catalyst for inequality in law.
Current generative AI tools are expensive to use. That raises the concern that only those with deep pockets — big firms and big corporations — will have access to them, while pro se individuals, smaller firms, and legal aid organizations will be shut out. Given the potential power of generative AI, this could further exacerbate inequality in the delivery of justice. One possible answer: public AI models not owned or controlled by any single corporation.
8. Methods are needed to benchmark the quality of AI products.
As more legal vendors develop products based on generative AI, how do we assess and monitor the quality of these tools? We need to come up with ways of benchmarking generative AI products.
9. Law firms are questioning how best to harness AI to leverage their own legal knowledge.
While nowhere near the scale of the data collections used to train large language models such as ChatGPT, law firms — and particularly larger firms — have their own “large language” collections of their cumulative work product and know-how, a reflection of what makes the firm unique. In the quest to make legal AI more precise and less hallucinatory, firms are wrestling with how to leverage this internal knowledge. Some are already developing their own proprietary AI tools, while others are turning to legal tech vendors to help them achieve this goal.
10. The need for legal training data could exacerbate questions of who owns the law.
As we seek to better train AI on the law, we must inevitably confront the question of who owns the law and who has access to that data. Already, some organizations are working to create open-access collections of legal data to be used in support of creating openly accessible generative AI tools in law.
11. AI will force courts and lawyers to grapple with new issues over authentication of evidence.
A recurring theme yesterday was the danger AI poses of creating evidence, such as images and videos, that is fake beyond detection or authentication. What impact could this have on how courts consider and accept evidence?
12. AI’s decisions need to be not only explainable, but justifiable.
Gillian Hadfield, the legal scholar who is the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, has put forth the notion that AI needs to be not only explainable, but justifiable, meaning AI that can show how its decisions are justifiable according to the rules and norms of society. That concept was cited yesterday in support of the idea that we need to find ways to establish and maintain trust and accountability in AI, not just as it is used in law, but across all sectors and geographies.
Thanks for a great event.
Before ending this post, allow me to thank Jonathan Zittrain, faculty director, Jack Cushman, director, Clare Stanton, product and research manager, and everyone else at the Library Innovation Lab for organizing this summit and allowing me to be part of it. Thanks also to the folks at Casetext who provided financial and other support for the conference.