The Fuzzy Evidence of Law’s AI Revolution

John Bliss and Johanna Schandera, 11/30/24

Today is ChatGPT’s second birthday—marking two years since its debut brought generative AI into the mainstream and sparked a frenzied race for legal applications. Around its first birthday, a year ago, surveys of the legal profession showed relatively slow adoption (with only 15% of lawyers using generative AI according to one study), but the majority (73% according to another study) indicated they were planning to adopt this technology within the next year. So, how did those predictions pan out? Have lawyers embraced AI at the rates they predicted? And what have they discovered about its real-world benefits and limitations?

This post reviews three recent surveys on the legal profession's use of generative AI. In general, the predictions appear to have been spot on: adoption has surged, with two studies finding 70-74% of legal professionals using generative AI, closely matching the 73% who said a year ago that they planned to adopt it. Many lawyers report dramatic time savings and other benefits, yet concerns about accuracy and security remain widespread, leaving the profession highly uncertain about where exactly AI fits into legal practice.

That’s the picture painted by these surveys—though the evidence is far from conclusive.

The Surveys and Their Limitations

The surveys were conducted by legal technology companies. We'll consider the Everlaw survey of in-house legal counsel released last month (October 2024), as well as two studies that share the title "State of AI in Legal": one from Litify (released August 2024) and the other from Ironclad (released April 2024), covering a wide range of legal professionals in law firms and in-house roles.

There is a great deal of noise and ambiguity in this data. Only the in-house survey is consistent about its focus on generative AI (text-generating systems based on large language models), while the other two surveys leave it ambiguous whether they're asking about generative AI or other AI technologies (though they seem to imply generative AI).

None of the reports offers a high standard of methodological transparency, despite assurances about study design and procedures. Two had substantial sample sizes (Everlaw n=475, Ironclad n=800), while Litify didn't disclose its sample size at all. Crucial details are missing from all three reports, including response rates, how participants were recruited, and how well they represent the broader legal community. More concerning still, the reports don't specify exactly whom they surveyed, using the catch-all term "legal professionals" without distinguishing between licensed attorneys and other legal roles like paralegals, operations specialists, and support staff.

Despite these limitations, some findings are striking enough to warrant attention, even if they offer only a rough window into the field.

Adoption Rates

Generally, the surveys find high rates of AI use. The Ironclad study found 74% of legal professionals using AI in their legal work, with nearly three out of four law firms permitting this (74%) and nearly all in-house offices (96%). The Everlaw in-house survey found similarly high rates, with 70% of respondents using AI at least once per week, while another 22% were planning to begin using it soon. The Litify study found a more modest 47% adoption rate, though the report's authors note this is double the rate from a year earlier. Altogether, these numbers suggest a dramatic increase in AI use over the past year.

Benefits

For many users, generative AI appears to be speeding up legal work. In the Litify survey, 92% of AI users report saving time, with 20% of these respondents saving 11-15 hours per week. They cite efficiencies across a wide range of tasks, including document review, summarization, and drafting. The Everlaw in-house survey also found a high rate of time savings, with 86% of respondents reporting faster task completion, the top benefit cited in the survey. The Ironclad survey found more modest gains, with 49% of respondents saying they were saving time during their workday and 41% reporting they were offloading mundane tasks.

Concerns

Accuracy of AI outputs remains a leading concern, with 40% of Ironclad respondents expressing doubts about AI tools’ reliability. This connects to broader ethical worries—73% of in-house respondents feared lawyers might over-rely on AI for legal guidance.

Security and data privacy form another major cluster of concerns. Half of Ironclad respondents and 72% of in-house respondents highlighted these issues. The Litify survey reinforces this theme: among non-AI users, security, privacy, and trustworthiness topped the list of adoption barriers.

Looking Ahead

Just like a year ago, today's legal professionals predict that generative AI will continue to proliferate rapidly across the profession. The Ironclad survey found 90% of current AI users plan to increase their usage this year. Some of the initial concerns about job security seem to be easing: the proportion of Litify respondents worried about AI's negative impact on employment dropped from 44% last year to 28% last month. And many expect the benefits of legal AI to continue expanding: in the Ironclad survey, 57% believe AI will increasingly alleviate work dissatisfaction, citing relief from mundane tasks. However, anxieties about the future persist. Among in-house respondents, a significant number (though fewer than half) worry that more advanced AI will erode their legal skills (38%) and their control over their work (41%).

These numbers suggest a profession caught between enthusiasm and apprehension, with the scales tipping toward embracing rather than resisting generative AI—though we should view all these survey findings skeptically, given their methodological limitations and lack of transparency about sampling and analysis.