AI for Legal Research: What is AI?

What is AI?

Artificial Intelligence, or AI, is technology that seeks to simulate human intelligence, language, and problem-solving capabilities. AI is integrated into many aspects of technological life, such as email spam filters, translators, self-driving cars, and virtual assistants (Siri, Alexa, or Google Assistant). AI is neither inherently malicious nor harmless; like many tools, its effects depend on the intent of its users and creators (Copeland, 2024).

This guide will focus on large language models (LLMs) and their relevance to, and incorporation into, legal research and practice.


B.J. Copeland, Artificial Intelligence (A.I.), Encyclopedia Britannica (2024), https://www.britannica.com/technology/artificial-intelligence.

Large Language Models (LLMs)

A large language model (LLM) is a type of AI that mimics human language and speech patterns. LLMs are machine learning models that aim to predict and generate plausible language (Google Machine Learning, 2023).

LLMs can summarize, answer questions, and organize information by mimicking human speech patterns. Common academic tools such as Grammarly, and chatbots such as ChatGPT, are powered by LLMs.

IBM, What Is Artificial Intelligence (AI)?, https://www.ibm.com/topics/artificial-intelligence.

Google, Introduction to Large Language Models, Machine Learning (2023), https://developers.google.com/machine-learning/resources/intro-llms (last visited Sep. 5, 2024).


Aspects of Using AI

Prompting:

Prompts can be thought of as queries, topics, or questions directed at an LLM. Interactive prompting, a give-and-take method, allows the LLM to better understand and refine the query. Asking a single vague question, with no examples or follow-up iterations, will often produce vague or superficial results. It is best to work through multiple iterations with an LLM: supply other sources (verified by you) and examples, and ask the LLM to refine its results. The AI & Legal Research tab has more tools and explanations on how best to prompt legal LLMs.
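The iterative prompting described above can be sketched as a multi-turn conversation. The snippet below assumes an OpenAI-style chat message format; no API call is made, and the legal question, statute citation, and message contents are purely illustrative.

```python
# Sketch of interactive (multi-turn) prompting, assuming an
# OpenAI-style chat message format. No API call is made here;
# the point is the structure of an iterative exchange.

def build_conversation() -> list[dict]:
    """Assemble a multi-turn legal research prompt (illustrative only)."""
    return [
        # Turn 1: start specific -- jurisdiction, issue, desired output.
        {"role": "user", "content": (
            "Summarize the elements of adverse possession in Texas, "
            "citing the controlling statute."
        )},
        # (The model's reply would appear here as {"role": "assistant", ...}.)
        # Turn 2: refine with a source you have verified, plus formatting.
        {"role": "user", "content": (
            "Refine your answer using Tex. Civ. Prac. & Rem. Code ch. 16, "
            "and format it as a bulleted list of elements."
        )},
    ]

conversation = build_conversation()
print(len(conversation))  # 2 user turns in this sketch
```

The second turn is where verification happens: the researcher, not the model, supplies the authoritative source and asks the model to reconcile its earlier answer with it.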

Hallucinations:

Hallucinations "are the tendency of LLMs to produce content that deviates from actual legal facts or well-established legal principles and precedents" (Dahl, Magesh, Suzgun, & Ho, 2024). Legal LLMs have a tendency to hallucinate in two ways: an incorrect response or a misgrounded response. An incorrect response describes the law incorrectly or states a factual error.  A misgrounded response describes the law correctly but cites a source that does not support its claims (Magesh, Surani, Dahl, Suzgun, Manning, & Ho, 2024). All tools that use AI, including proprietary products such as Lexis+ AI and Westlaw, have the potential to hallucinate. It is the responsibility of the user to verify the claims made by AI.

Temperature & Top P:

Temperature and top p are two parameters that can be adjusted on many LLMs. Temperature controls the randomness of the output: the higher the temperature, the more varied the output and the more likely the LLM is to hallucinate or produce misgrounded results; the lower the temperature, the more likely the LLM is to give a grounded result based in fact. Top p (nucleus sampling) similarly constrains generation to the most probable words.
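To make these parameters concrete, the sketch below assembles request parameters in the shape used by OpenAI-style chat APIs. The model name is illustrative, the endpoint call itself is omitted, and the specific values shown are assumptions, not recommendations.

```python
# Sketch of how temperature and top_p are typically set on an
# OpenAI-style chat-completion request. Only the request parameters
# are assembled; no network call is made.

def make_request_params(prompt: str, grounded: bool) -> dict:
    """Build request parameters, lowering temperature for grounded output."""
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        # Low temperature -> more deterministic, fact-anchored output;
        # high temperature -> more varied output, higher hallucination risk.
        "temperature": 0.1 if grounded else 1.0,
        # top_p (nucleus sampling) restricts generation to the smallest
        # set of tokens whose cumulative probability reaches this value.
        "top_p": 0.9,
    }

params = make_request_params("Summarize Marbury v. Madison.", grounded=True)
print(params["temperature"])  # 0.1
```

For legal research, where grounded answers matter more than creative variety, a low temperature is generally the safer default, though no setting eliminates hallucination.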


Matthew Dahl et al., Hallucinating Law: Legal Mistakes with Large Language Models Are Pervasive (2024), https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive.

Varun Magesh et al., AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries (2024), https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries.


Data & Privacy

When using LLMs, exercise caution with personally identifiable information (PII). Some models, including certain OpenAI models and proprietary models, use prompts and interactions to train future versions. There is no guarantee that data, documents, or other information entered into any generative AI tool will be completely protected. Before using these products, read their privacy statements, which are usually linked at the bottom of the homepage.

For example, as of August 2024, Westlaw Precision's privacy policy states that, for user contributions and user content, the personal information it collects includes:

"Personal information in content and communications uploaded, sent, shared, or inputted through our Services or via our networks and infrastructure, including feedback you provide to us and the content of communications between you and us or sent via our network and Services--including the content of your queries on our Services, such as artificial intelligence prompts."

Each AI, whether proprietary or open, will have a privacy and data statement. When using AI for assignments, there is always a possibility that the AI repackages the work of others, which can lead to accusations of plagiarism if sources are not cited correctly.


Jeremy White, How Strangers Got My Email Address from ChatGPT’s Model, New York Times, Dec. 22, 2023, https://www.nytimes.com/interactive/2023/12/22/technology/openai-chatgpt-privacy-exploit.html.

Eli Tan, When the Terms of Service Change to Make Way for A.I. Training, New York Times, Jun. 26, 2024, https://www.nytimes.com/2024/06/26/technology/terms-service-ai-training.html.

Privacy Statement, Thomson Reuters (2024), https://www.thomsonreuters.com/en/privacy-statement.html#CollectProcess (last visited Aug. 5, 2024).