I'm still working on the next post in the hallucinations series, which I hope to publish next week. In the meantime, I'm going to talk about email.
Specifically, I recently finished reading A World Without Email by Cal Newport, published in 2021. Newport, a computer science professor at Georgetown, is probably best known for Digital Minimalism, a 2019 book in which he argues that our contemporary digital lifestyle, especially our smartphone and social media use, is bad for us. A World Without Email is effectively the workplace version of Digital Minimalism. Although much of it is about email, the technology, it's more generally about how workplace digital communication—whether email, Slack, or Zoom—makes knowledge workers less happy and less productive through a phenomenon he calls the "hyperactive hive mind."
Newport is a good writer, and his books are refreshingly short. He also writes in a practical style, making the case for why X is bad and how you can change it by doing Y. Thus, I think it's easy to categorize him as a "self-help" author, full of tips and tricks to improve your life in incremental ways. On a closer reading, though, he's really advancing deep philosophical arguments: about technology and ethics, but also, in this book, about the meaning of "knowledge work" and the organizational theory (or lack thereof) behind it.
I've been familiar with Newport since I listened to interviews with him when he published Digital Minimalism. However, up until late 2024, I had only read his most recent book, Slow Productivity. For personal reasons, I had decided to check out Digital Minimalism and implement his "digital detox" approach, which inspired me to then read A World Without Email. So I've been immersed in Newport's ideas for a few months now. Since lawyers are knowledge workers, and lawyers love sending email, I thought I'd share a few thoughts and ideas that came up in the course of my reading.
Agency
Another reason the books read like self-help books is that Newport is earnest. He personally does not use social media, and he writes in a style that feels quaint, without the inflection of irony that more "plugged in" writers seem to have.
On reflection, I've noticed that this earnest style differentiates him from other "big tech" critics by creating a place for individual agency within his critique. He is critical: You don't read his books and feel good about the state of large technology companies. But rather than attributing this to a crisis of capitalism or telling us the world is burning, he focuses on writing about things that you can do (or not do) to make your life better.
I did not fully appreciate this perspective until I tried reading another book afterwards, How Data Happened. It's written by two Ivy League professors—a data scientist and a historian of science—and it purports to be a "sweeping history of data and its technical, political, and ethical impact on our world."[2] From the publisher's description:
From facial recognition—capable of checking people into flights or identifying undocumented residents—to automated decision systems that inform who gets loans and who receives bail, each of us moves through a world determined by data-empowered algorithms. But these technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.
As one might guess from the fact that I started this blog, I find this premise compelling, and I was especially interested in reading a "history of data." And, as a prior, I support the authors' premise that "data has long been used as a tool and a weapon in arguing for what is true, as well as a means of rearranging or defending power."
The book was exhausting to read, and I put it down after three chapters. This wasn't because I suddenly lost interest in the subject matter, but largely because the authors went to great pains to constantly remind you that power and data are related (did you know that "statistics" was invented to collect data for the state??) and that this history has infected the business model of Big Tech. I was annoyed partly because the book needed a better editor; too much time was spent telling me that this history applies to the present, and not enough time showing me. But I think I've also been Newport-pilled, and I'm less willing to accept at face value the claim that "our world is determined by data-empowered algorithms."
I don't have a grand closing thought on this point, but Newport has given me a healthy skepticism against both wildly optimistic and pessimistic takes about our AI future.
Law Firms and "Innovation"
Notably, Newport does not argue that email is inherently bad. Instead, he argues that email is a great communication tool—for the person sending the email. Because it's asynchronous (i.e., it doesn't require the recipient to answer immediately), can be sent to multiple recipients, and is not restricted to certain types of messages, email drastically reduces the cost of sending each individual message and that’s why it’s taken over.
The harm is that lowering the cost of sending a message means everyone receives more messages. These messages arrive at all times, and the recipient cannot prejudge their importance or salience, effectively requiring workers to monitor their inboxes constantly. This is a problem because monitoring new information through email is draining for the same psychological reasons that refreshing your social media feed is draining (a phenomenon known as context switching). This makes knowledge workers less productive, because they spend most of their limited "attention capital" answering email or Slack messages.
Newport analogizes this current state of knowledge work to car manufacturing prior to the invention of the assembly line, where each car was built sequentially, in one location. In both cases, a large amount of each worker’s “capital” is wasted, either because mechanics sit around waiting to build their part of the car, or because knowledge workers spend so much time responding to email.
Although we now think of the assembly line as obviously more efficient and productive, it was not viewed that way at the time. Most people—including workers themselves—were content with the "handcrafted" approach to building cars, and Henry Ford took it upon himself to disrupt the system. If we carry the analogy through, for an innovative (that is, more economically productive) change to happen, leadership needs to commit to doing something that is more inconvenient for individual workers, and that will initially be perceived that way. It is also risky: Ford almost went out of business on the way to making his changes.
I don’t think many people would argue that law firms are “innovative,” by and large. There are many cultural explanations for this that are likely valid, whether it's a predilection for risk aversion, a culture of “adhering to precedent,” or simply meeting the demands of clients. But, to my mind, one under-theorized[1] material explanation is the ethical restriction (outside of Arizona and Utah) on non-attorney ownership of law firms, combined with the reality that firm revenue is atomized among individual partners (put differently, the "product" is not heavily differentiated from the person delivering it). In Newport's account, innovation and productivity require extraordinary conviction by management in the face of financial and employee pressure, which is hard to muster without the ability to raise new capital or consolidate the capital already in place.
I don’t have a pat answer to how to make law firms more productive, or whether that’s an appropriate goal. But the mere existence of potentially productivity-enhancing software (whether email, AI, or something else) does not magically make a workplace more productive and may have the opposite effect.
Work v. Workflows
Newport argues that the hard way out requires a redesign of workflows, not the work itself:
Knowledge work is better understood as the combination of two components: work execution and workflow. The first component, work execution, describes the act of actually executing the underlying value-producing activities of knowledge work—the programmer coding, the publicist writing the press release. It's how you generate value from attention capital.
The second component, workflow, is one we defined in the introduction of this book. It describes how these fundamental activities are identified, assigned, coordinated, and reviewed. The hyperactive hive mind is a workflow, as is Devesh's project board system. If work execution is what generates value, then workflows are what structure these efforts.
Once we understand that these components describe two different things, we find a way to escape the autonomy trap. When [management theorist Peter] Drucker emphasized autonomy, he was thinking about work execution, as these activities are often too complicated to be decomposed into rote procedures. Workflows, on the other hand, should not be left to individuals to figure out on their own, as the most effective systems are unlikely to arise naturally. They need instead to be explicitly identified as part of an organization's operating procedures.
If I manage a development team, I shouldn't tell my computer programmers how to write specific routines. I should, however, think a lot about how many routines they're asked to write, how these tasks are tracked, how we manage the code base, and even who else in the organization is allowed to bother them, and so on.
The book isn't about AI specifically (or really at all), but I think this distinction is helpful for thinking about what purpose we want any technology to serve in a professional work environment. Sam Harden, a legal aid attorney who writes about technology, expressed the sentiment well on his blog:
Why are we trying to automate interesting legal research and not the boring stuff that lawyers have to do? In fact, two out of three of the audience questions after the sales pitch were “how would I use AI to automate [boring thing] that I have to do inside [matter management system].” I’d like to see a half-day on how to automate boring stuff with AI, but I am a very dumb person.
Anecdotally, in my short time working on this blog, I've tried working with AI tools a few times as part of the writing process—not for initial drafting, but as a way of editing and revising—and it doesn't click. I don't have any moral quandary, but at this point I'm not waiting expectantly for new improvements to the models. This is because the problem is not model quality, but that writing itself helps me think. 99 percent of my ideas may not reach the page, but those thoughts may turn into an idea for something else, lead me to look up a book or paper, etc. My capital, as it were, is the thoughts in my head, and actually sitting down to write is the only surefire way to produce more thoughts.
However, to make use of those thoughts, I turn them into notes. I also stockpile books and articles that I've read or plan to read. I am very interested in using large language models to optimize my process—my workflow—for storing and accessing the information in that pile. My bet is that the truly useful AI applications in knowledge work are going to help make sense of the information that underpins the work, rather than the creative act that makes the job valuable in the first place.
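To make that concrete, here is a minimal sketch of the kind of workflow tool I have in mind: a script that embeds a folder of plain-text notes and retrieves them by meaning rather than by keyword. This is my own illustration, not anything from Newport's book; the notes/ directory layout and the choice of embedding model are assumptions.

```python
# A minimal sketch of semantic search over a pile of notes.
# Assumes the sentence-transformers package and a notes/ folder of .txt files.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small local embedding model

# Read every note and embed it once, up front.
paths = sorted(Path("notes").glob("*.txt"))
texts = [p.read_text() for p in paths]
embeddings = model.encode(texts, normalize_embeddings=True)

def search(query: str, k: int = 5) -> list[Path]:
    """Return the k notes closest in meaning to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity, since the vectors are normalized
    best = np.argsort(scores)[::-1][:k]
    return [paths[i] for i in best]

# e.g., surface old notes relevant to a draft about the assembly-line analogy
for path in search("assembly line analogy for knowledge work"):
    print(path)
```

Nothing in this sketch touches the writing itself; it only changes how quickly I can find what I've already thought, which is exactly the line Newport draws between work execution and workflow.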
[1] I am planning a deeper dive on this issue at some point, but as a case in point take this 2022 debate in the Yale Law Journal about the future of Nonlawyer Participation. Neither side, to my mind, persuasively establishes what “innovation” even is, much less whether the current market restrictions are sufficient to support it.
"Sweeping history" is truly a warning sign.