Learning the Law with AI: Why Law School Students Are Tentative about Using ChatGPT

By Serena Wellen, Senior Director of Product Management, LexisNexis North America. Published in Analyses & Trends, June 2nd, 2023.

Generative AI is changing the way people work across every industry and profession. While the application looks different in each context, professionals are largely using generative AI products in three main ways: to conduct research and sort through large amounts of data, to generate content and ideas, and to accelerate manual tasks (e.g., writing emails or coding).

The same is true for lawyers: in a recent LexisNexis survey that polled lawyers, law students, and consumers, lawyers reported using ChatGPT and other large language models for research (59%), increasing efficiency (54%), drafting documents (45%), and writing emails (34%).

But, while generative AI continues to grow in popularity within the legal community, some are concerned about the rate at which it’s being worked into legal practice – and the lack of checks and balances on the way it’s being used.

Surprisingly, law students are among the most apprehensive – particularly when it comes to the way law is taught. Only 9% of law students surveyed said they are currently using generative AI in their studies, and only 25% said they plan to eventually incorporate it into their work.

Law students largely belong to a generation that has been digitally literate from a young age and is highly receptive to new technology, which makes their resistance to AI unusual, and all the more telling.

I found it fascinating to hear what law students are saying about the shortcomings of generative AI in the context of law school and early career progression.

They’re Concerned About the Accuracy and Validity of Research

Generative AI products and large language models are only as good as the data they’re fed. ChatGPT was trained on open-web content, which is noisy, unreliable, and broad, and its training data does not extend beyond 2021; as a result, its responses are often error-ridden or outdated when used for law school research.

“I have seen instances where ChatGPT will completely make up studies and false information and present that information in a very convincing manner,” said one first-year law student. “I’m not sure that I would necessarily trust it to do its own research.”

A 3L shared a similar sentiment, explaining that “ChatGPT has repeatedly shown itself to report serious inaccuracies as fact, and can be led to say almost anything with the right prompt; it will even produce fake citations to go with its false claims.”

“I fear the ease of use and ChatGPT’s ability to produce plausible-sounding lies will lead people to take its word as fact and without dutifully checking the accuracy of everything it says,” she added.

In its current state, generative AI is prone to mistakes and, worse, can package fabricated information in convincing, plausible-sounding responses, leading students astray when conducting legal research.

They’re Worried About Academic Integrity

Many students fear ChatGPT will fuel plagiarism, cheating, and other forms of dishonesty within an academic context – pointing to the ways in which many honor codes and other law school policies do not yet account for generative AI.

“I foresee issues with academic integrity as the more competitive or anxious students may try to use AI in ways that are not expressly prohibited in order to get ahead,” warned one second-year student. “This will require law schools to adapt their academic integrity policies to the changing landscape.”

Not only will academic institutions have to change the way they govern students, but professors will also have to transform the way they teach in order to maintain academic rigor.

“Professors will have to wrestle with the ease of access it grants to students and learn how to increase rigor in other ways,” explained a 1L. That might mean putting an end to take-home exams or placing further restrictions on seminar papers.

But some students see an opportunity for professors to embrace ChatGPT and teach it as a tool for efficiency, showing students its clear limits when it comes to critical thinking and producing quality legal work.

“I’d personally like to see an approach where its use is at times encouraged and forbidden at other times so students can learn a useful tool while still knowing how to think for themselves,” said an optimistic 3L.

They’re Convinced It Will Inhibit Critical Thinking

Practicing law requires drawing connections between things that are not obviously related. Law students need to be comfortable operating in this gray area by the time they graduate, and they need to know how to apply concepts from outside the law, such as psychology, historical context, and societal implications, to the facts at hand.

Many students believe AI, in its current form, does not have the ability to understand and employ the nuance required for effective legal work. The result: increased efficiency but decreased quality of work product.

“I think AI can help lawyers spot issues and develop arguments, but I also think the use of AI could cause lawyers to be lazy and not properly do their work,” said one 3L who reported no plans to use AI at this time.

“From my experience it does a good job on drafting correspondence and overviews but sometimes misses the mark on the law,” another 3L agreed. “There is quite a bit of gray area when applying the law to specific sets of facts that AI just doesn't do a great job with yet.”

In order to develop competent, skilled lawyers, law schools need to make clear to students that AI is not a way to learn the core competencies of practicing law, such as complex legal analysis, crafting novel arguments, and drawing connections between authorities that may not be obviously related. Rather, it’s a way to make repetitive, manual tasks more efficient so lawyers can go deeper on those core competencies.

They’re Worried About Career Paths

Generative AI automates many of the responsibilities associated with entry-level associate positions. As such, students – who have paid large sums of money for law school – are worried about the return on their investment now that ChatGPT is becoming more commonplace at law firms.

Many wonder how they will get their start in their careers if generative AI takes over. Entry-level positions are an opportunity for new lawyers to further their education and apply the concepts they learned in school to real-life situations. If ChatGPT continues to grow in influence, students may not have access to that critical, hands-on, practical learning that comes in those first few years of a law career.

A second-year student said it best, asserting that “generative AI could help reduce the amount of busy work that attorneys have to complete, which would enable them to spend more time on client interactions and developing their conceptual approach to the issues in the case. However, completing busy work could help new attorneys, fresh out of law school, to begin developing the skills necessary to being a successful attorney in actual practice.”

Looking Ahead

Law students represent a group that is actively learning the mechanics of law: how ideas are generated, executed, and documented within a legal context. That means their concerns about generative AI can offer eye-opening insights into its shortcomings for practicing lawyers.

Ultimately, their concerns point to one thing: lawyers should use ChatGPT with caution.

In its current form, generative AI is not a reliable legal tool. It needs more thorough testing and further updates before it can serve lawyers with consistently accurate information. The legal community needs to have more conversations about its ethical limitations and inherent biases before moving full steam ahead. There needs to be more regulation and standardization around its use cases to limit costly mistakes and professional liability.

And, coming out of this work, there needs to be agreement on how it is taught as a tool in law school.

Featured image by Alexandra_Koch via Pixabay

