Welcome to make law. I’m a lawyer, sometime programmer, and—like many of us—captivated by large language models, with something to say about them. I enjoy learning and talking about law and AI, and through those conversations I’ve realized I may have thoughts others are interested in hearing. I view make law as an extension of the conversations I'm already having.
More broadly, I want make law to be a place that helps you make sense of how technology--including, but certainly not limited to, AI--affects the practice of law. I plan to keep it boring: You should not come to make law for hot takes on new products, market analysis and predictions, or prescriptive advice about "five ways you should be using AI right now." I hope to avoid "AI Fight Club":
> Like so many other disputes, this one was swiftly pulled into the all-devouring maelstrom of what I call “AI Fight Club,” a ritualized combat, waged bout after bout between two highly stylized positions. Position One is that scaling (feeding LLMs more data and more compute) will lead AI to transform everything, replacing knowledge workers with algorithmic processes that effectively automate sentient activity, opening the gate into Artificial General Intelligence and the post-Singularity paradise. Position Two is the counter-claim that scaling doesn’t work, and LLMs are useless but still changing everything for the worse, as management replaces human workers with automated bullshit machines. It’s a no-holds-barred wrestling competition between two starkly opposed perspectives on the world, that keeps on going, and going, and going.
> The problem, as the AllDayTA spat illustrates, is that neither of these positions provides a particularly good guide to how the technologies are actually developing.
Both sides of AI Fight Club look foolish because both are trying to be correct about the future, which seems difficult. Rather than guessing constantly about what's around the corner, I write from the simple premise that the future is uncertain and we understand less about the past and the present than we like to think. By looking at what's already in front of us and trying to understand it better, I think we'll be better off and learn much more (at the very least, we'll be calmer).
A little about me.
I'm a practicing attorney. I currently work at a small (six-attorney) litigation firm in Chicago, and I live in Chicago's West Ridge neighborhood with my spouse and our toddler. Ever since I was a kid, I've been a data nerd--an interest that manifested mostly through baseball and the "Moneyball" revolution that changed the sport in the mid-2000s. For this and other reasons, I was always interested in using programming to do stuff with numbers. As a lawyer, to my surprise, I've enjoyed technical projects more than "traditional" lawyer tasks like writing briefs. I like being the person on the team who can help with technical issues, like getting the right boolean search to sift through many documents. After several earlier attempts, this self-realization helped me finally dig in and get serious about learning Python.
As it happened, I got started just as ChatGPT and other LLM tools broke into the mainstream. For many contingent reasons, Python is the de facto programming language for machine learning, natural language processing, and LLM technology. It's also been clear since ChatGPT took off that LLMs have the potential to be a *big deal* in law. So to the extent I have any "expertise" on law, technology, and AI specifically, it's attributable entirely to timing, dumb luck, and a higher-than-average tolerance for learning just enough linear algebra to understand AI research papers.
My plan.
It's pretty simple right now. I want to publish a post at least once every other week. I may publish more frequently, but through hard-won experience, I'm now a believer in under-promising and over-delivering. I don't have a "beat" I plan to cover, but in general I plan to focus more on the impact of technology on law than on the impact of law on technology. To start, I will be subsisting largely on the low-hanging fruit of law-and-AI topics (my first post, tomorrow, is about hallucinations), but the task of sketching out this blog has already given me some interesting research ideas I never would have anticipated. I hope we'll both be surprised by what I come up with.
For the nerds.
You can find me on GitHub here. As mentioned, I code mostly in Python, but I also know my way around the command line and Docker, have "learned" a little Rust, and have played around with building a web server. These days I'm diving head-first into LLMs, but I like working with data generally, including sports stats, legal data, and, especially, geospatial data related to housing.
About the name.
In computing, there is a well-known command called "make" that can compile and execute many interdependent programs by following instructions in a simple custom text file called a "makefile." In law, it's common to say that someone (a lawyer, a judge) "make[s] law" when they use legal reasoning to create new and unprecedented legal doctrine.
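For the curious, a minimal makefile looks something like this (the file and program names here are hypothetical, just for illustration):

```make
# Running "make" builds the "hello" program, but only if hello.c has
# changed since the last build. Recipe lines must be indented with a tab.
hello: hello.c
	cc -o hello hello.c

# "make run" builds hello first (if needed) and then executes it.
run: hello
	./hello
```

Each rule names a target, the files it depends on, and the commands that produce it; make chains those dependencies together so that one short command can rebuild a whole project.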