I’ll be picking up the law and business thread again next week with a humdinger post about the history of the billable hour. So this week I’ll keep it short and feed the content machine with some off-the-cuff predictions about the future of AI.
Cal Newport, who I've written about before, just published some time-stamped predictions about work and AI. These are just one man's predictions and no one is an oracle. But as my spouse can attest, I place great faith in Newport and his thoughts about knowledge work, so they're worth highlighting.
Newport makes several predictions, but I want to focus on what he says about search. Rather than using generative AI for text production (the task everyone thought generative AI was well-suited for), Newport argues that "smart search" (using AI to search the internet and get back a written summary) is the breakthrough general use case. Newport cites survey data to make his point, and his observation tracks my own experience. Although I'm not a particularly heavy user of consumer generative AI products, I have been using Gemini Deep Research[1] occasionally for blog-related research. I review the text output, but I don't use it; the value comes from the dozens of links to articles that I might never have found after hammering away with different Google searches. Alternatively, if the tool ends up linking to articles I've already found, I can feel confident that I've exhausted the limits of public Internet search.
To reprise my concept of "vibe research": Newport's argument applies seamlessly to legal AI. When I got access to generative AI at work, I didn't start using it to draft fully formed briefs. But I did start, and never stopped, using the "Westlaw Precision" search tool, which lets the user write a legal research question in natural language. The tool returns an "answer" in the form of a three-to-four-paragraph memo plus links to relevant cases. But as with Deep Research, I quickly read and discard the written portion. I keep coming back for the links: I can now ask a research question exactly the way I would phrase it to a colleague, and the machine does the searching for me.
Right now, the rest of the Westlaw Precision experience is clunky, and after my initial query I usually end up clicking through cases just as I did with the old Westlaw. But I am confident that within two or three years a legal technology company (possibly Westlaw itself) will create a "smart legal research" tool that lets the user manage the entire research process through natural language. Hearkening back to what I wrote in the vibe research post, I predict that this will further "commoditize" legal research skill. Ten years ago, when I started law school, writing a good Boolean search query and navigating the West indexing system were valuable skills, and you could differentiate yourself by finding the right cases more quickly and reliably. That probably will not matter in five years.
In a static world, it would be easy to celebrate or dread this outcome, either because law practice will become more humane or because AI will "replace" all law firm associates. But we don't live in a static world, and I think this change in legal research will have profound second-order effects on legal writing, and thus on the litigation system as a whole.
Whenever I read "old" cases from before the 1990s, I am always struck by how few citations there are. Through a 2020s lens, it's embarrassing: professional judges and lawyers would simply assert points of law with nothing to back them up! A typical opinion or brief today includes multiple cites for any point of law. But the old practice makes sense when you consider that whoever was researching the opinion had to look up every case in a physical book.
That physical constraint produced a different cognitive process. In the old days, the work really did resemble research: you started with a question and figured out an answer. Recently I've noticed (either because it happens more often or because I'm paying more attention) that a supervisor will come to me with the answer already in mind ("find me a case that says...") and ask me to find the case that fills in the blank. The "smart research" shift is only going to accelerate this tendency.
And indeed, one tech company that focuses on AI for legal research, Midpage, now has a "proposition search" tool:
Instead of typing in keywords or booleans, you’ll be able to type in the legal proposition you want support for. The full sentence, just as you would write it in a brief, memo, or email. Every search result will come with the sentence that matches your proposition highlighted, even if the relevant quote doesn’t include the same phrasing or keywords.
Wild stuff! However, rather than suddenly making every litigator more productive, I think this will accelerate another trend I've noticed: many legal briefs no longer argue over points of law, with both sides agreeing on the relevant cases and debating the correct application. Instead, each side shapes its argument around its preferred list of cases. And because there are more cases than ever to draw from, it's easier to slip in disingenuous citations: three or four cases that do not actually stand for the proposition the lawyer says they do. Rather than a reasoned debate, the contest often feels like two elementary school students competing to raise their hands and get called on first by the teacher.
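For the technically curious, here is a rough sketch of how that kind of proposition matching can work. I have no inside knowledge of Midpage's system; this just illustrates the generic embedding-similarity approach that lets a proposition match a sentence sharing none of its keywords. The model name and case snippets below are placeholders of my own invention.

```python
# A minimal sketch of proposition-style search using off-the-shelf sentence
# embeddings. Illustrative only; not Midpage's actual implementation.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# The proposition, written exactly as it might appear in a brief.
proposition = "A party waives an argument by failing to raise it in its opening brief."

# Candidate sentences pulled from hypothetical case law.
candidates = [
    "Issues not presented in an appellant's initial brief are deemed abandoned.",
    "The district court did not abuse its discretion in denying the continuance.",
    "Arguments omitted from the opening submission will not be considered on appeal.",
]

# Embed everything and rank candidates by cosine similarity to the proposition.
prop_vec = model.encode(proposition, convert_to_tensor=True)
cand_vecs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(prop_vec, cand_vecs)[0]

for sentence, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {sentence}")
```

The point is that the match is made on meaning rather than vocabulary, which is exactly why keyword and Boolean skills stop being the bottleneck.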
I don't know the future, but it seems smart legal research will only accelerate this trend, and the system will have to adjust to survive the deluge.
[1] By all accounts, Gemini Deep Research is not as good as OpenAI Deep Research, but the former is free.